The Multi-Modality Whole Heart Segmentation (MMWHS) dataset [1] is a multi-modality medical image dataset with two modalities, Magnetic Resonance Imaging (MRI) and Computed Tomography (CT); [2] gives a brief introduction to the data format.
The original data can be downloaded from [1/project]:
- training images & labels, test images: MM-WHS 2017 Dataset
- test labels: MMWHS_evaluation_testdata_label_encrypt_1mm_forpublic.zip
The training set has 20 scans each for MR and CT, and the test set 40 each. Recent medical domain-adaptation segmentation works [4-9] all use the data preprocessed by [3], and so does this post (aside: [19] gives a download link for the preprocessed Abdominal dataset). Downloads:
- training & val: PnpAda_release_data.zip
- MR test: test_mr_image&labels.zip
- CT test: test_ct_image&labels.zip
Extract them all into mmwhs/ to get:
mmwhs/
|- PnpAda_release_data/
| |- ct_train_tfs/
| | |- ct_train_slice0.tfrecords # image and label
| |- ct_val_tfs/
| | |- ct_val_slice0.tfrecords
| |- mr_train_tfs/
| | |- mr_train_slice0.tfrecords
| `- mr_val_tfs/
| |- mr_val_slice0.tfrecords
|- test_mr_image&labels/
| |- gth_mr_1007.nii.gz # label
| |- image_mr_1007.nii.gz # image
`- test_ct_image&labels/
  |- gth_ct_1003.nii.gz
  |- image_ct_1003.nii.gz
Splitting and Order
[3] splits the data differently from the original [1]: it uses only the original training set, randomly picking 4 scans per modality as its test set and keeping the remaining 16 scans for the training and val sets.
- training & val come as tfrecords; each tfrecords file (image and label alike) holds 3 consecutive slices: the preceding slice, the central slice, and the following slice. Different tfrecords files therefore overlap.
- test is still in nii.gz format and can be read with nibabel [10], SimpleITK [11], or medpy [12]; see the sketch below.
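For instance, one test scan and its label can be loaded like this. A minimal sketch using nibabel, with file names following the tree above:

import nibabel as nib
import numpy as np

# one CT test scan and its label, paths per the tree above
img = nib.load("mmwhs/test_ct_image&labels/image_ct_1003.nii.gz")
gth = nib.load("mmwhs/test_ct_image&labels/gth_ct_1003.nii.gz")
vol = img.get_fdata()          # image volume as a numpy array
seg = np.asarray(gth.dataobj)  # label volume, same spatial shape
print(vol.shape, seg.shape, np.unique(seg))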
Because the training & val data have already been cut into slices along the coronal axis (see [2]; i.e., the z axis), and per [13] the slices were shuffled with no related information kept in the file names, neither the slice order nor which slices went into the val set is known.
Preprocessing
For the data preprocessing, see the readme of [4/code] and [14].
Both the tfrecords and the nii.gz files provided by [3] are preprocessed, which can be seen by comparing them with the original data of [1]: the original images have a much larger value range, with raw HU values reaching from negative to positive thousands [2], while all values in [3] lie within [-5, 5] (verified below).
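As a quick sanity check on that claim, the raw value range of one original volume can be inspected directly. A minimal sketch, assuming the file naming of the original [1] release (adjust the path to where the raw data is extracted):

import medpy.io as medio

# one original CT training volume from [1]; name per the original release
im, _ = medio.load("ct_train/ct_train_1001_image.nii.gz")
print(im.min(), im.max())  # raw HU values, on the order of +/- thousands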
Loading
For reading the tfrecords data (and converting it to numpy), see [4,5,8,9,15]. There are two variants, with and without eager mode; the main code comes from convert_tfrecords.py of [8]. The code in this section uses TensorFlow 2.14.0.
eager
import os, os.path as osp
import numpy as np
import tensorflow as tf

if '2' == tf.__version__.split('.')[0]:
    tf = tf.compat.v1

# feature spec for decoding the tfrecords, from the README of [3]
FEATURES = {
    'dsize_dim0': tf.FixedLenFeature([], tf.int64),
    'dsize_dim1': tf.FixedLenFeature([], tf.int64),
    'dsize_dim2': tf.FixedLenFeature([], tf.int64),
    'lsize_dim0': tf.FixedLenFeature([], tf.int64),
    'lsize_dim1': tf.FixedLenFeature([], tf.int64),
    'lsize_dim2': tf.FixedLenFeature([], tf.int64),
    'data_vol': tf.FixedLenFeature([], tf.string),
    'label_vol': tf.FixedLenFeature([], tf.string)
}

# shape of both image and label; 3 because each record holds three consecutive slices
SIZE = [256, 256, 3]

def _parse(example_proto):
    return tf.io.parse_single_example(example_proto, FEATURES)

def read_tfrecord(f):
    dataset = tf.data.TFRecordDataset(f).map(_parse)
    for data in dataset:  # each file holds a single example
        # print(type(data))  # dict
        img = tf.decode_raw(data['data_vol'], tf.float32).numpy()
        label = tf.decode_raw(data['label_vol'], tf.float32).numpy()
        img = img.reshape(SIZE)
        label = label.reshape(SIZE)
        return img, label

# read
SRC = "PnpAda_release_data/mr_val_tfs"
# save along the way (for the order check later)
DEST = "mr_val_eager"
os.makedirs(DEST, exist_ok=True)
for f in os.listdir(SRC):
    img, label = read_tfrecord(osp.join(SRC, f))
    np.save(osp.join(DEST, osp.splitext(f)[0]), img[:, :, 1])  # only save the central slice
non-eager
import os, os.path as osp
import numpy as np
import tensorflow as tf

if '2' == tf.__version__.split('.')[0]:
    tf.compat.v1.disable_v2_behavior()  # required for tf1-style data reading
    tf = tf.compat.v1

FEATURES = {
    'dsize_dim0': tf.FixedLenFeature([], tf.int64),
    'dsize_dim1': tf.FixedLenFeature([], tf.int64),
    'dsize_dim2': tf.FixedLenFeature([], tf.int64),
    'lsize_dim0': tf.FixedLenFeature([], tf.int64),
    'lsize_dim1': tf.FixedLenFeature([], tf.int64),
    'lsize_dim2': tf.FixedLenFeature([], tf.int64),
    'data_vol': tf.FixedLenFeature([], tf.string),
    'label_vol': tf.FixedLenFeature([], tf.string)
}

SIZE = [256, 256, 3]

# read
SRC = "PnpAda_release_data/mr_val_tfs"
# save along the way (for the order check later)
DEST = "mr_val_non-eager"
os.makedirs(DEST, exist_ok=True)

files = os.listdir(SRC)
files = [osp.join(SRC, f) for f in files]  # need the full paths, otherwise file-not-found errors later
file_queue = tf.train.string_input_producer(files, shuffle=False)  # disable shuffling, otherwise the order differs

reader = tf.TFRecordReader()
_, serialized_example = reader.read(file_queue)
data = tf.parse_single_example(serialized_example, features=FEATURES)
img_vol = tf.decode_raw(data['data_vol'], tf.float32)
label_vol = tf.decode_raw(data['label_vol'], tf.float32)
img_vol = tf.reshape(img_vol, SIZE)
label_vol = tf.reshape(label_vol, SIZE)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for f in files:  # cap the number of reads manually, otherwise the queue loops forever
        img, label = sess.run([img_vol, label_vol])
        np.save(osp.join(DEST, osp.splitext(osp.basename(f))[0]), img[:, :, 1])  # again only the central slice
    coord.request_stop()
    coord.join(threads)
comparison
The two variants should read out identical results, including the order. Cross-check:
import os, os.path as osp
import numpy as np

# the mr_val data saved above
P1 = "mr_val_eager"
P2 = "mr_val_non-eager"
for f in os.listdir(P1):
    im1 = np.load(osp.join(P1, f))
    im2 = np.load(osp.join(P2, f))
    assert (im1 != im2).sum() == 0, f
print("DONE")
- Conclusion: identical.
Statistics
The READMEs of [4,5] say the data should be transformed to [-1, 1], and their data_loader.py does so with min-max scaling, where the image min and max values are (a scaling sketch follows the list):
- MR: -1.8, 4.4
- CT: -2.8, 3.2
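A minimal sketch of that scaling, using the per-modality constants above (minmax_scale is a hypothetical helper, not the actual data_loader.py code):

import numpy as np

# image min/max per modality, values from above
MINMAX = {"mr": (-1.8, 4.4), "ct": (-2.8, 3.2)}

def minmax_scale(img, modality):
    # linearly map [min, max] to [-1, 1]
    lo, hi = MINMAX[modality]
    return 2.0 * (np.asarray(img) - lo) / (hi - lo) - 1.0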
Per [17], these values were derived from the tfrecords files alone. Verification:
tfrecords
import os, os.path as osp, math
import tensorflow as tf

if '2' == tf.__version__.split('.')[0]:
    tf.compat.v1.disable_v2_behavior()
    tf = tf.compat.v1

# same feature spec as above
FEATURES = {
    'dsize_dim0': tf.FixedLenFeature([], tf.int64),
    'dsize_dim1': tf.FixedLenFeature([], tf.int64),
    'dsize_dim2': tf.FixedLenFeature([], tf.int64),
    'lsize_dim0': tf.FixedLenFeature([], tf.int64),
    'lsize_dim1': tf.FixedLenFeature([], tf.int64),
    'lsize_dim2': tf.FixedLenFeature([], tf.int64),
    'data_vol': tf.FixedLenFeature([], tf.string),
    'label_vol': tf.FixedLenFeature([], tf.string)
}

for m in ("mr", "ct"):
    max_v, min_v = -math.inf, math.inf
    for sub in ("train", "val"):
        d = osp.join("PnpAda_release_data", f"{m}_{sub}_tfs")  # tfrecords dirs live under PnpAda_release_data/
        max_v_sub, min_v_sub = -math.inf, math.inf
        files = os.listdir(d)
        files = [osp.join(d, f) for f in files]
        file_queue = tf.train.string_input_producer(files, shuffle=False)
        reader = tf.TFRecordReader()
        _, serialized_example = reader.read(file_queue)
        parser = tf.parse_single_example(serialized_example, features=FEATURES)
        img_vol = tf.decode_raw(parser['data_vol'], tf.float32)
        img_vol = tf.reshape(img_vol, [256, 256, 3])
        with tf.Session() as sess:
            sess.run(tf.initialize_all_variables())
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess, coord=coord)
            for f in files:
                img = img_vol.eval()
                max_v_sub = max(max_v_sub, img.max())
                min_v_sub = min(min_v_sub, img.min())
                print(f, end='\r')
            coord.request_stop()
            coord.join(threads)
        print(f"\n{m}, {sub}, min:", min_v_sub, ", max:", max_v_sub)
        max_v = max(max_v, max_v_sub)
        min_v = min(min_v, min_v_sub)
    print(f"\n{m}, min:", min_v, ", max:", max_v)
- Output:
mr, train, min: -1.7675079 , max: 4.3754067
mr, val, min: -1.511309 , max: 3.2670646
mr, min: -1.7675079 , max: 4.3754067
ct, train, min: -2.731593 , max: 3.0706542
ct, val, min: -2.4145143 , max: 2.2560735
ct, min: -2.731593 , max: 3.0706542
nii.gz
import os, os.path as osp, math
import medpy.io as medio

for m in ("mr", "ct"):
    max_v, min_v = -math.inf, math.inf
    d = f"test_{m}_image&labels"
    for f in os.listdir(d):
        if not f.startswith("image_"):
            continue
        print(f, end='\r')
        im, _ = medio.load(osp.join(d, f))
        max_v = max(max_v, im.max())
        min_v = min(min_v, im.min())
    print('\n', m, min_v, max_v)
- Output:
mr -1.1368891922215185 2.6575754759544323
ct -1.763460938640936 2.368554272081745
Conclusion: the values used in the code essentially match those derived from the tfrecords. Also, [18] gives the min and max values used to normalize the Abdominal datasets.
References
1. (MIA 2019) Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge - paper, project, paper with code
2. Metrics for medical image segmentation
3. (arXiv 2018) PnP-AdaNet: Plug-and-play adversarial domain adaptation network with a benchmark at cross-modality cardiac segmentation - paper, github
4. (AAAI 2019) Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation - paper, code
5. (TMI 2020) Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation - paper, code
6. (JBHI 2020) Margin Preserving Self-Paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation - paper, code
7. (MICCAI 2021) MT-UDA: Towards Unsupervised Cross-modality Medical Image Segmentation with Limited Source Labels - paper, code
8. (TMI 2021) Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation - paper, code
9. (MICCAI 2022) Attention-Enhanced Disentangled Representation Learning for Unsupervised Domain Adaptation in Cardiac Segmentation - paper, code
10. nipy/nibabel, NiBabel
11. SimpleITK/SimpleITK, SimpleITK
12. loli/medpy, MedPy
13. Dataset preprocessing #49
14. The pre-processing of the original data #9 -> The Preprocess Data Issue #12
15. tsmatz/tensorflow-mnist-batch-read-and-train-tutorial
16. tf.compat.v1.train.string_input_producer
17. About the minmax value #11
18. Min Max value to normalize Abdominal datasets. #56
19. Prepocessed Abdominal Data #51