Generating the Cloud Motion Winds Field from Satellite Cloud Imagery Using Deep Learning Approach

This repository contains the source code, dataset and pretrained model for CMWNet, provided by Chao Tan.

The paper is available for download here. Click here for more details.


Dataset & Pretrained Model

CMWD (Cloud Motion Wind Dataset) is the first cloud motion wind dataset for deep learning research. It contains 6388 adjacent grayscale image pairs for training and another 715 image pairs for testing. Our CMWD dataset is available for download at TianYiCloud(2.2GB) or BaiduCloud(2.2GB) (extraction code: np6o).
The CMWD dataset may be used for scientific research only; please cite our work when you use it.

The pretrained model of our CMWNet on the CMWD dataset can be downloaded from TianYiCloud or BaiduCloud (extraction code: wqk0).
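
After unpacking, a quick sanity check against the pair counts stated above can catch a broken download. This is a minimal sketch: the TRAIN_A/TEST_A folder names for the input imagery are assumptions (only the label folders TRAIN_B/TEST_B are named in the training steps below), so adjust the paths to the real archive layout.

```python
import os

# Expected pair counts from the dataset description above.
# TRAIN_A/TEST_A are assumed names for the input-image folders.
expected = {"TRAIN_A": 6388, "TEST_A": 715}
for split, n_pairs in expected.items():
    path = os.path.join("datasets", "data", split)
    n_files = len(os.listdir(path))
    print(f"{split}: {n_files} files ({n_pairs} pairs expected)")
```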

Prerequisites

  • Python 3.7
  • PyTorch >= 1.4.0
  • OpenCV 4.0
  • PyQt4
  • numpy
  • visdom
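
A typical way to install most of these with pip follows (the PyPI package names are assumptions; PyQt4 predates pip wheels on most platforms and is usually easier to obtain via conda or a system package manager):

pip install "torch>=1.4.0" opencv-python numpy visdom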

Training

  1. Please download and unzip the CMWD dataset and place it in the datasets/data folder.
  2. Generate labels for the CMWD dataset.
    • Since our CMWD dataset does not explicitly provide a cloud motion wind label for each image pair, you can use an existing method (such as any optical flow algorithm) to generate pixel-wise motion vectors between the two frames of a pair and use them as the training label; a sketch follows this list. Please save the motion vectors as a numpy array of size (2 * image_length * image_width) under the TRAIN_B and TEST_B folders respectively. Each generated label takes the same name as its input satellite image.
  3. Run python -m visdom.server to start the visdom server.
  4. Run python run.py to start training from scratch.
  5. You can monitor the training process at any time by visiting http://localhost:8097 in your browser.
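
Step 2 leaves the choice of optical flow method open. Below is a minimal sketch using OpenCV's Farneback algorithm to produce one (2, image_length, image_width) label; the TRAIN_A input folder and the file names are assumptions, since only TRAIN_B/TEST_B are named above.

```python
import cv2
import numpy as np

# Load one adjacent grayscale pair. The TRAIN_A folder and file names
# are assumptions -- adjust to the actual archive layout.
prev_frame = cv2.imread("datasets/data/TRAIN_A/0001.png", cv2.IMREAD_GRAYSCALE)
next_frame = cv2.imread("datasets/data/TRAIN_A/0002.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow between the two frames; returns an (H, W, 2) array
# holding the per-pixel (dx, dy) displacement.
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, next_frame, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Reorder to the (2, image_length, image_width) layout described in step 2
# and save it under TRAIN_B with the same name as the input satellite image.
label = flow.transpose(2, 0, 1).astype(np.float32)
np.save("datasets/data/TRAIN_B/0001.npy", label)
```

Any other dense flow method (e.g. TV-L1) works the same way, as long as the saved array keeps the (2, image_length, image_width) layout and the matching file name.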

Testing

  1. For the CMWD dataset, please download and unzip the pretrained model and place it in the checkpoints folder.
  2. Modify the configs/FlowNetS.yaml file and change the status option from train to test (see the snippet after this list).
  3. Run python run.py to start testing.
  4. The testing results will be saved in the checkpoint/FlowNetS/testing directory.
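
The change in step 2 is a single field in configs/FlowNetS.yaml; only the status key is documented here, so leave any other keys in that file untouched:

status: test   # change from: train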

Citation

@inproceedings{tan2021cmwnet,
     title={Generating the Cloud Motion Wind Field from Satellite Cloud Imagery Using Deep Learning Approach},
     author={Tan, Chao},
     booktitle={IGARSS},
     year={2021},
     note={to appear},
}
