Brain-arterial-segmentation

This repo uses a U-Net to segment brain arteries from T1-weighted images. The U-Net architecture was introduced for biomedical image segmentation by Olaf Ronneberger et al. It has two main parts: an encoder and a decoder. The encoder consists of convolutional layers followed by pooling operations and extracts features from the image. The decoder uses transposed convolutions to recover spatial resolution and enable precise localization; the network is fully convolutional, with no fully connected layers. The original paper: U-Net: Convolutional Networks for Biomedical Image Segmentation. Here's an example of the U-Net architecture (example for 32x32 pixels in the lowest resolution) given in the publication above.

Ground truth generation

Data for subjects 1-20 was collected from the ForrestGump dataset. Arteries cannot be segmented with a simple threshold in T1 space, but they can be in TOF (time-of-flight angiography) space, so the goal became registering T1 into TOF space. T1 could not be registered to TOF directly because the two have very different fields of view, so T2 was registered to TOF as an intermediary: by composing the T1-to-T2 registration with the T2-to-TOF registration, T1 was successfully brought into TOF space. Ground-truth arterial segmentations were then generated by thresholding the TOF images. NeuroDebian was used to preprocess the subjects before training the U-Net.
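The registration chain described above (T2 to TOF, T1 to T2, then composing the two transforms) could be sketched with FSL's `flirt` and `convert_xfm` roughly as follows. This is an illustration of the pipeline, not the exact commands used in the repo; all file names here are hypothetical.

```python
import subprocess

def build_registration_commands(t1, t2, tof, outdir):
    """Return FSL commands that register T1 into TOF space via T2.

    File names are hypothetical; actually running these requires FSL.
    """
    return [
        # 1. Register T2 to TOF (their fields of view are compatible)
        ['flirt', '-in', t2, '-ref', tof,
         '-out', f'{outdir}/t2_in_tof', '-omat', f'{outdir}/t2_to_tof.mat'],
        # 2. Register T1 to T2
        ['flirt', '-in', t1, '-ref', t2,
         '-out', f'{outdir}/t1_in_t2', '-omat', f'{outdir}/t1_to_t2.mat'],
        # 3. Compose the transforms: T1 -> T2 -> TOF
        ['convert_xfm', '-omat', f'{outdir}/t1_to_tof.mat',
         '-concat', f'{outdir}/t2_to_tof.mat', f'{outdir}/t1_to_t2.mat'],
        # 4. Apply the composed transform to bring T1 into TOF space
        ['flirt', '-in', t1, '-ref', tof, '-applyxfm',
         '-init', f'{outdir}/t1_to_tof.mat', '-out', f'{outdir}/t1_in_tof'],
    ]

def run_registration(t1, t2, tof, outdir):
    for cmd in build_registration_commands(t1, t2, tof, outdir):
        subprocess.run(cmd, check=True)
```

Note the argument order in `convert_xfm -concat B A`, which produces the transform A followed by B, i.e. T1-to-T2 followed by T2-to-TOF.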

Axial view of T1 and TOF images

T1 & TOF axial view

T2 images were skull-stripped using BET (Brain Extraction Tool), T2 was aligned to TOF space, and finally T1 was aligned to TOF space.

T2 in TOF space

T1 in TOF space

T1 in TOF

TOF arteries (ground truth)

TOF arteries.
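Generating the binary arterial ground truth by thresholding the TOF image can be sketched as below; the threshold value and the toy input are illustrative, not the values used in the repo.

```python
import numpy as np

def threshold_arteries(tof, threshold):
    """Binarize a TOF volume: voxels above `threshold` are labeled artery (1)."""
    return (tof > threshold).astype(np.uint8)

# Toy example; in the repo the input would come from
# nib.load(...).get_fdata() with a manually chosen threshold.
tof = np.array([[0.1, 0.9], [0.6, 0.2]])
mask = threshold_arteries(tof, 0.5)
```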

The preprocessed dataset, containing the TOF_arteries and T1_in_TOF space volumes for each subject, can be found at this link

Training U-net

The U-net model was trained on subjects 1-15 using the TOF arteries as ground truth.

import segmentation_models as sm

BACKBONE = 'resnet34'
# preprocess_input = sm.get_preprocessing(BACKBONE)

# U-Net with a ResNet-34 encoder pretrained on ImageNet
model = sm.Unet(BACKBONE, input_shape=(None, None, 3), encoder_weights='imagenet')
model.compile(
    'Adam',
    loss=sm.losses.bce_jaccard_loss,   # binary cross-entropy + Jaccard loss
    metrics=[sm.metrics.iou_score],
)

import nibabel as nib

# `sub` is the path to one subject's preprocessed directory
groundseg = nib.load(sub + '/tof_arteries.nii.gz').get_fdata()
t1 = nib.load(sub + '/t1_in_tof.nii.gz').get_fdata()
print('Training {}'.format(sub))
x_train, y_train, x_val, y_val = generatingInputs(t1, groundseg, sub)
model.fit(
    x=x_train,
    y=y_train,
    batch_size=16,
    epochs=100,
    validation_data=(x_val, y_val),
)
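`generatingInputs` is a helper from this repo; a minimal sketch of what it might do is shown below: slice the 3D volumes into 2D axial samples, repeat the single-channel T1 slice into 3 channels for the ImageNet-pretrained encoder, and split off a validation set. The slice axis, channel handling, and split ratio are all assumptions.

```python
import numpy as np

def generating_inputs_sketch(t1, groundseg, val_frac=0.2):
    """Turn 3D volumes into per-slice training pairs (assumed behavior).

    t1, groundseg: 3D arrays of identical shape (H, W, n_slices).
    Returns x_train, y_train, x_val, y_val with x of shape (n, H, W, 3).
    """
    n_slices = t1.shape[2]
    # One 2D sample per axial slice; repeat T1 to 3 channels for the
    # pretrained encoder, keep the mask single-channel.
    x = np.stack([np.repeat(t1[:, :, i, None], 3, axis=2)
                  for i in range(n_slices)])
    y = np.stack([groundseg[:, :, i, None] for i in range(n_slices)])
    n_val = max(1, int(n_slices * val_frac))
    return x[:-n_val], y[:-n_val], x[-n_val:], y[-n_val:]
```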

The trained model can be found at this link.

Using the trained model to test on subjects (15-20)

Loading the pretrained model

# The custom loss and metric must be passed so Keras can deserialize them
model = tf.keras.models.load_model('Final model.h5', custom_objects={'binary_crossentropy_plus_jaccard_loss': sm.losses.bce_jaccard_loss, 'iou_score': sm.metrics.iou_score})

Prediction

groundseg = nib.load('sub-16/tof_arteries.nii.gz').get_fdata()
t1   = nib.load('sub-16/t1_in_tof.nii.gz').get_fdata()
t1, groundseg = reshaping(t1, groundseg)
preds = model.predict(t1)

Testing on Subject 16
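To quantify how well the predicted mask matches the TOF ground truth on a held-out subject, an IoU can be computed from the binarized predictions (IoU is the metric the model was trained with); the 0.5 binarization threshold used here is an assumption.

```python
import numpy as np

def iou_score(pred, truth, threshold=0.5):
    """Intersection-over-union between a probability map and a binary mask."""
    pred_bin = pred > threshold
    truth_bin = truth > 0.5
    intersection = np.logical_and(pred_bin, truth_bin).sum()
    union = np.logical_or(pred_bin, truth_bin).sum()
    return intersection / union if union else 1.0
```

For example, `iou_score(preds, groundseg)` on the arrays from the prediction snippet above would give a single scalar score for the subject.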