Swin Transformer
By Ze Liu*, Yutong Lin*, Yue Cao*, Han Hu*, Yixuan Wei, Zheng Zhang, Stephen Lin and Baining Guo.
This repo is the official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". It currently includes code and models for the following tasks:
- Image Classification: Included in this repo. See get_started.md for a quick start.
- Object Detection and Instance Segmentation: See Swin Transformer for Object Detection.
- Semantic Segmentation: See Swin Transformer for Semantic Segmentation.
Updates
04/12/2021
Initial commits:
- Pretrained models on ImageNet-1K (Swin-T-IN1K, Swin-S-IN1K, Swin-B-IN1K) and ImageNet-22K (Swin-B-IN22K, Swin-L-IN22K) are provided.
- The supported code and models for ImageNet-1K image classification, COCO object detection and ADE20K semantic segmentation are provided.
- The CUDA kernel implementation for the local relation layer is provided in the LR-Net branch.
Introduction
Swin Transformer (the name Swin stands for Shifted window) was initially described in the arXiv paper and serves as a general-purpose backbone for computer vision. It is essentially a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while still allowing cross-window connections.
Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.
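For intuition, here is a minimal PyTorch sketch of the two operations the scheme combines (not the repo's exact code; tensor sizes are illustrative): attention is computed inside fixed non-overlapping windows, and alternate blocks cyclically shift the feature map so that the new windows straddle the old window boundaries.

```python
import torch

def window_partition(x, window_size):
    """Split a (B, H, W, C) feature map into (num_windows*B, ws*ws, C) token groups."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)

window_size = 7
x = torch.randn(1, 56, 56, 96)  # e.g. a Swin-T stage-1 feature map (H=W=56, C=96)

# Regular block: self-attention runs independently inside each 7x7 window.
windows = window_partition(x, window_size)                # (64, 49, 96)

# Shifted block: cyclically shift by window_size // 2 before partitioning,
# so the new windows cross the previous window boundaries (and shift back after).
shift = window_size // 2
shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
shifted_windows = window_partition(shifted, window_size)  # (64, 49, 96)
```

Because attention is restricted to fixed-size windows, the cost grows with the number of windows rather than quadratically with image size, which is what keeps the backbone efficient at high resolutions.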
Main Results on ImageNet with Pretrained Models
ImageNet-1K and ImageNet-22K Pretrained Models
| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Swin-T | ImageNet-1K | 224x224 | 81.2 | 95.5 | 28M | 4.5G | 755 | - | github/baidu |
| Swin-S | ImageNet-1K | 224x224 | 83.2 | 96.2 | 50M | 8.7G | 437 | - | github/baidu |
| Swin-B | ImageNet-1K | 224x224 | 83.5 | 96.5 | 88M | 15.4G | 278 | - | github/baidu |
| Swin-B | ImageNet-1K | 384x384 | 84.5 | 97.0 | 88M | 47.1G | 85 | - | github/baidu |
| Swin-B | ImageNet-22K | 224x224 | 85.2 | 97.5 | 88M | 15.4G | 278 | github/baidu | github/baidu |
| Swin-B | ImageNet-22K | 384x384 | 86.4 | 98.0 | 88M | 47.1G | 85 | github/baidu | github/baidu |
| Swin-L | ImageNet-22K | 224x224 | 86.3 | 97.9 | 197M | 34.5G | 141 | github/baidu | github/baidu |
| Swin-L | ImageNet-22K | 384x384 | 87.3 | 98.2 | 197M | 103.9G | 42 | github/baidu | github/baidu |

Note: the access code for baidu is `swin`.
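As a hedged sketch of inspecting one of the released checkpoints (the file name follows the Swin-T naming used in the repo's instructions, but the path is hypothetical; point it at whichever checkpoint you downloaded from the links above):

```python
import torch

# Hypothetical local path to a downloaded checkpoint.
ckpt = torch.load('swin_tiny_patch4_window7_224.pth', map_location='cpu')

# This repo's checkpoints store the weights under the 'model' key (an
# assumption if you substitute checkpoints from elsewhere).
state_dict = ckpt['model']
n_params = sum(p.numel() for p in state_dict.values())
print(f'{n_params / 1e6:.1f}M parameters')  # roughly 28M for Swin-T, per the table
```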
Main Results on Downstream Tasks
COCO Object Detection (2017 val)
| Backbone | Method | pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Swin-T | Mask R-CNN | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | 267G |
| Swin-S | Mask R-CNN | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | 359G |
| Swin-T | Cascade Mask R-CNN | ImageNet-1K | 3x | 50.4 | 43.7 | 86M | 745G |
| Swin-S | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | 838G |
| Swin-B | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 145M | 982G |
| Swin-T | RepPoints V2 | ImageNet-1K | 3x | 50.0 | - | 45M | 283G |
| Swin-T | Mask RepPoints V2 | ImageNet-1K | 3x | 50.3 | 43.6 | 47M | 292G |
| Swin-B | HTC++ | ImageNet-22K | 6x | 56.4 | 49.1 | 160M | 1043G |
| Swin-L | HTC++ | ImageNet-22K | 3x | 57.1 | 49.5 | 284M | 1470G |
| Swin-L | HTC++* | ImageNet-22K | 3x | 58.0 | 50.4 | 284M | - |

Note: * indicates multi-scale testing.
ADE20K Semantic Segmentation (val)
| Backbone | Method | pretrain | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Swin-T | UPerNet | ImageNet-1K | 512x512 | 160K | 44.51 | 45.81 | 60M | 945G |
| Swin-S | UPerNet | ImageNet-1K | 512x512 | 160K | 47.64 | 49.47 | 81M | 1038G |
| Swin-B | UPerNet | ImageNet-1K | 512x512 | 160K | 48.13 | 49.72 | 121M | 1188G |
| Swin-B | UPerNet | ImageNet-22K | 640x640 | 160K | 50.04 | 51.66 | 121M | 1841G |
| Swin-L | UPerNet | ImageNet-22K | 640x640 | 160K | 52.05 | 53.53 | 234M | 3230G |

Citing Swin Transformer
```bibtex
@article{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  journal={arXiv preprint arXiv:2103.14030},
  year={2021}
}
```
Getting Started
- For Image Classification, please see get_started.md for detailed instructions.
- For Object Detection and Instance Segmentation, please see Swin Transformer for Object Detection.
- For Semantic Segmentation, please see Swin Transformer for Semantic Segmentation.
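For a quick smoke test without cloning the repo, the third-party timm library reimplements these models and can fetch the ImageNet-1K weights. A minimal sketch (the model name is timm's registered name for Swin-T, which is an assumption about your timm version and not part of this repo):

```python
import timm
import torch

# Assumed timm name for the Swin-T 224x224 ImageNet-1K model; check
# timm.list_models('swin*') for the names your installed version registers.
model = timm.create_model('swin_tiny_patch4_window7_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)   # a dummy 224x224 RGB image
with torch.no_grad():
    logits = model(x)             # (1, 1000) ImageNet-1K class logits
print(logits.argmax(dim=1))
```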
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.