This repository has been archived by the owner on Oct 9, 2023. It is now read-only.
Hello,
I'm trying to export a semantic segmentation model to ONNX, but I haven't been successful so far.
I assumed that the input dimensions are NCHW. I tried C=1 and C=3; neither variant worked for me. I made small changes to the official example:
https://lightning-flash.readthedocs.io/en/stable/reference/semantic_segmentation.html
Could someone give me a hint on how to solve the problem?
Andreas
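(For reference, PyTorch convolutional backbones take input in NCHW order: batch, channels, height, width. A dummy input matching the 256x256 resolution used in the script below would look like this:)

```python
import torch

# PyTorch vision models expect NCHW input: (batch, channels, height, width).
# For a single RGB image at the 256x256 training resolution used below:
x = torch.randn(1, 3, 256, 256)
print(tuple(x.shape))  # (1, 3, 256, 256)
```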
import torch

import flash
from flash.core.data.utils import download_data
from flash.image import SemanticSegmentation, SemanticSegmentationData
from torchinfo import summary  # added relative to the official example

# 1. Create the DataModule
# The data was generated with the CARLA self-driving simulator as part of the Kaggle Lyft Udacity Challenge.
# More info here: https://www.kaggle.com/kumaresanmanickavelu/lyft-udacity-challenge
download_data(
    "https://github.com/ongchinkiat/LyftPerceptionChallenge/releases/download/v0.1/carla-capture-20180513A.zip",
    "./data",
)

datamodule = SemanticSegmentationData.from_folders(
    train_folder="data/CameraRGB",
    train_target_folder="data/CameraSeg",
    val_split=0.1,
    transform_kwargs=dict(image_size=(256, 256)),
    num_classes=21,
    batch_size=4,
)

# 2. Build the task
model = SemanticSegmentation(
    backbone="mobilenetv3_large_100",
    head="fpn",
    num_classes=datamodule.num_classes,
)

# 3. Create the trainer and finetune the model
trainer = flash.Trainer(max_epochs=1, gpus=[0])
trainer.finetune(model, datamodule=datamodule, strategy="freeze")

# added relative to the official example: inspect the model, then try the ONNX export
summary(model, input_size=(1, 3, 256, 256))
input_sample = torch.randn((1, 3, 256, 256))
model.to_onnx("model.onnx", input_sample, export_params=True, verbose=True)

# 4. Segment a few images!
datamodule = SemanticSegmentationData.from_files(
    predict_files=[
        "data/CameraRGB/F61-1.png",
        "data/CameraRGB/F62-1.png",
        "data/CameraRGB/F63-1.png",
    ],
    batch_size=3,
)
predictions = trainer.predict(model, datamodule=datamodule)
print(predictions)

# 5. Save the model!
trainer.save_checkpoint("semantic_segmentation_model.pt")
05/25/2022 09:21:22 - INFO - torch.distributed.nn.jit.instantiator - Created a temporary directory at /tmp/tmph9udwjmq
05/25/2022 09:21:22 - INFO - torch.distributed.nn.jit.instantiator - Writing /tmp/tmph9udwjmq/_remote_module_non_sriptable.py
/home/andreas/anaconda3/envs/flash/lib/python3.9/site-packages/pytorch_lightning/utilities/parsing.py:261: UserWarning: Attribute 'metrics' is an instance of `nn.Module` and is already saved during checkpointing. It is recommended to ignore them using `self.save_hyperparameters(ignore=['metrics'])`.
rank_zero_warn(
Using 'mobilenetv3_large_100' provided by qubvel/segmentation_models.pytorch (https://github.com/qubvel/segmentation_models.pytorch).
Using 'fpn' provided by qubvel/segmentation_models.pytorch (https://github.com/qubvel/segmentation_models.pytorch).
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
| Name | Type | Params
-----------------------------------------------------
0 | train_metrics | ModuleDict | 0
1 | val_metrics | ModuleDict | 0
2 | test_metrics | ModuleDict | 0
3 | head | FPN | 4.9 M
4 | backbone | MobileNetV3Encoder | 3.0 M
-----------------------------------------------------
1.9 M Trainable params
2.9 M Non-trainable params
4.9 M Total params
19.561 Total estimated model params size (MB)
Sanity Checking: 0it [00:00, ?it/s]/home/andreas/anaconda3/envs/flash/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 32 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
/home/andreas/anaconda3/envs/flash/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 32 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Epoch 0: 100%|█| 1000/1000 [01:32<00:00, 10.82it/s, loss=0.197, v_num=39, train_jaccardindex_step=0.315, train_cross_entropy_step=0.135, val_jaccardindex=0.349, val
==============================================================================================================
Layer (type:depth-idx) Output Shape Param #
==============================================================================================================
SemanticSegmentation -- --
├─FPN: 1 -- --
│ └─FPNDecoder: 2 -- --
│ │ └─ModuleList: 3-1 -- 1,623,808
├─FPN: 1-1 [1, 21, 256, 256] --
│ └─MobileNetV3Encoder: 2-1 [1, 3, 256, 256] --
│ └─FPNDecoder: 2-2 [1, 128, 64, 64] --
│ │ └─Conv2d: 3-2 [1, 256, 8, 8] 246,016
│ │ └─FPNBlock: 3-3 [1, 256, 16, 16] 28,928
│ │ └─FPNBlock: 3-4 [1, 256, 32, 32] 10,496
│ │ └─FPNBlock: 3-5 [1, 256, 64, 64] 6,400
│ │ └─ModuleList: 3-6 -- (recursive)
│ │ └─MergeBlock: 3-7 [1, 128, 64, 64] --
│ │ └─Dropout2d: 3-8 [1, 128, 64, 64] --
│ └─SegmentationHead: 2-3 [1, 21, 256, 256] --
│ │ └─Conv2d: 3-9 [1, 21, 64, 64] 2,709
│ │ └─UpsamplingBilinear2d: 3-10 [1, 21, 256, 256] --
│ │ └─Activation: 3-11 [1, 21, 256, 256] --
├─MobileNetV3Encoder: 1-2 -- (recursive)
│ └─MobileNetV3Features: 2-4 -- --
│ │ └─Conv2dSame: 3-12 -- (recursive)
│ │ └─BatchNorm2d: 3-13 -- (recursive)
│ │ └─Hardswish: 3-14 -- --
│ │ └─Sequential: 3-15 -- 2,971,488
==============================================================================================================
Total params: 4,890,309
Trainable params: 1,942,757
Non-trainable params: 2,947,552
Total mult-adds (G): 2.29
==============================================================================================================
Input size (MB): 0.79
Forward/backward pass size (MB): 119.68
Params size (MB): 19.56
Estimated Total Size (MB): 140.02
==============================================================================================================
Traceback (most recent call last):
File "/mnt/data/flash_example.py", line 41, in <module>
model.to_onnx('model.onnx', input_sample, export_params=True, verbose=True)
File "/home/andreas/anaconda3/envs/flash/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/andreas/anaconda3/envs/flash/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1877, in to_onnx
input_sample = self._apply_batch_transfer_handler(input_sample)
File "/home/andreas/anaconda3/envs/flash/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 291, in _apply_batch_transfer_handler
batch = hook(batch, dataloader_idx)
File "/home/andreas/anaconda3/envs/flash/lib/python3.9/site-packages/flash/core/data/data_module.py", line 450, in on_after_batch_transfer
transform = self._model_on_after_batch_transfer_fns[stage]
KeyError: None