GitHub - gaopengcuhk/Unofficial-Pix2Seq: Unofficial implementation of Pix2SEQ
source link: https://github.com/gaopengcuhk/Unofficial-Pix2Seq
Unofficial-Pix2seq: A Language Modeling Framework for Object Detection
Unofficial implementation of Pix2Seq. Please use this code with caution: many implementation details do not follow the original paper and are significantly simplified.
This project aims for a step-by-step replication of Pix2Seq, starting from the DETR codebase.
Step 1
Starting from DETR, we add bounding box quantization over normalized coordinates, a sequence generator built from the quantized coordinates, an auto-regressive decoder, and training code for Pix2Seq.
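The coordinate quantization and box serialization described above can be sketched as follows. The number of bins (`num_bins=500`) and the placement of category tokens directly after the coordinate vocabulary are illustrative assumptions, not necessarily this repo's exact token layout.

```python
def quantize(coord, num_bins=500):
    """Map a normalized coordinate in [0, 1] to an integer bin token."""
    return min(int(coord * num_bins), num_bins - 1)

def dequantize(token, num_bins=500):
    """Map a bin token back to the center of its coordinate bin."""
    return (token + 0.5) / num_bins

def box_to_tokens(box, category, num_bins=500):
    """Serialize one (x_min, y_min, x_max, y_max) box plus its class id
    into five tokens: four coordinate bins, then a category token offset
    past the coordinate vocabulary (assumed layout)."""
    return [quantize(c, num_bins) for c in box] + [num_bins + category]
```

Quantization loses at most half a bin width per coordinate, so a larger `num_bins` trades vocabulary size for localization accuracy.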
How to use?
Install packages following the original DETR instructions; the command line is the same as DETR's.
With the image size set to 512, each epoch takes 3 minutes on 8 A100 GPUs.
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path ../../data/coco/
Released at 8 pm, Sep 26.
Problems to be solved: 1) better logging; 2) correct padding, end-of-sentence, and start-of-sentence tokens; 3) efficient padding; 4) better code organization; 5) fixed ordering of bounding boxes; 6) shared dictionary between positions and categories.
Released at 10 pm, Sep 26.
Problems to be solved: 1) better code organization; 2) fixed ordering of bounding boxes.
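Assembling the padded decoder target with the special tokens mentioned in the to-do lists above might look like this. The token ids for `PAD`/`BOS`/`EOS` and the flat `[BOS] box-tokens... [EOS] [PAD]*` layout are illustrative assumptions.

```python
PAD, BOS, EOS = 0, 1, 2  # assumed special-token ids

def build_target(box_token_lists, max_len=20):
    """Concatenate per-box token lists into one fixed-length decoder
    target: [BOS] box tokens... [EOS], right-padded with PAD and
    truncated to max_len."""
    seq = [BOS]
    for toks in box_token_lists:
        seq.extend(toks)
    seq.append(EOS)
    seq += [PAD] * (max_len - len(seq))  # no-op when already full
    return seq[:max_len]
```

Because the sequence is truncated at `max_len`, images with many objects silently lose boxes; the original paper handles this with sequence augmentation rather than truncation alone.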
Step 2
Finish the inference code of Pix2Seq and report performance on the object detection benchmark. Note that we implement an inefficient greedy decoder: it could be significantly accelerated by caching previous decoder states, as in Fairseq, and its quality could be improved by nucleus sampling or beam search. We leave these complex engineering tricks for future implementation and keep the project as simple as possible for understanding language-modeling-based object detection.
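The inefficient greedy decoding described above amounts to re-running the decoder on the whole prefix at every step and appending the argmax token. A minimal sketch, where `step_logits` is a stand-in for the model (any callable mapping a token prefix to per-token scores):

```python
def greedy_decode(step_logits, bos, eos, max_len):
    """Greedy auto-regressive decoding without any state caching:
    at each step the full prefix is re-scored and the argmax token
    is appended, until EOS is produced or max_len is reached."""
    seq = [bos]
    for _ in range(max_len):
        scores = step_logits(seq)  # full forward pass over the prefix
        nxt = max(range(len(scores)), key=scores.__getitem__)
        seq.append(nxt)
        if nxt == eos:
            break
    return seq
```

With a cached decoder (as in Fairseq's incremental decoding), each step would only process the newest token, turning the quadratic cost in sequence length into linear.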
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path ../../data/coco/ --eval --resume checkpoint.pth --batch_size 4
After 30 epochs of training, our replication of Pix2Seq achieves 12.1 mAP on MS COCO (image resolution 512 for fast training).
COCO bbox detection val5k evaluation with a maximum of 25 bounding box predictions (the original paper uses 100):
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.121
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.239
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.107
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.007
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.091
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.267
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.144
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.166
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.166
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.011
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.128
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.350
After 107 epochs of training, our replication of Pix2Seq achieves 17.9 mAP on MS COCO (image resolution 512 for fast training). The checkpoint can be downloaded here.
COCO bbox detection val5k evaluation with a maximum of 25 bounding box predictions (the original paper uses 100):
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.179
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.314
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.177
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.021
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.157
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.375
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.191
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.233
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.233
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.028
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.210
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.469
After 213 epochs of training, our replication of Pix2Seq achieves 26.4 mAP on MS COCO (image resolution raised to 1333 after epoch 150 for better training). The checkpoint can be downloaded here.
COCO bbox detection val5k evaluation with a maximum of 25 bounding box predictions (the original paper uses 100):
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.264
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.423
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.273
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.080
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.287
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.454
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.254
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.334
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.334
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.102
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.356
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.556
Observation
(1) The model tends to generate the end-of-sentence (EOS) token early; after emitting EOS, the language model will still generate valid bounding boxes. (2) Repeated sequences, a common problem in seq2seq modeling. (3) Stopping prediction at the EOS token yields high precision but low recall.
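The precision/recall trade-off in observation (3) comes down to where the decoded token stream is cut. A sketch of both post-processing choices, assuming the five-tokens-per-box serialization used here:

```python
def extract_boxes(tokens, eos, use_eos=True, tokens_per_box=5):
    """Group decoded tokens into per-box token groups.
    use_eos=True: drop everything after the first EOS
                  (higher precision, lower recall).
    use_eos=False: skip EOS tokens and keep decoding output
                   (lower precision, higher recall)."""
    if use_eos and eos in tokens:
        tokens = tokens[:tokens.index(eos)]
    else:
        tokens = [t for t in tokens if t != eos]
    n = len(tokens) // tokens_per_box  # incomplete trailing boxes are dropped
    return [tokens[i * tokens_per_box:(i + 1) * tokens_per_box]
            for i in range(n)]
```

The tables below compare exactly these two modes at several maximum box counts.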
High precision, low recall when stopping at EOS; 512 resolution; maximum 20 bounding boxes
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.199
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.346
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.197
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.032
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.189
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.401
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.208
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.265
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.265
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.044
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.251
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.519
Low precision, high recall when ignoring EOS; 512 resolution; maximum 20 bounding boxes
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.193
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.345
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.184
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.025
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.191
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.386
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.217
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.296
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.301
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.057
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.301
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.566
Low precision, high recall when ignoring EOS; 512 resolution; maximum 40 bounding boxes
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.192
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.341
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.187
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.027
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.192
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.386
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.215
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.298
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.306
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.064
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.311
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.562
Low precision, high recall when ignoring EOS; 512 resolution; maximum 60 bounding boxes
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.192
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.340
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.187
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.027
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.192
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.386
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.215
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.299
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.307
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.065
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.313
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.563
Position distribution (multiple plausible positions):
Released at 10 am, Sep 28.
Problems to be solved: 1) add sequence-likelihood evaluation on the validation set; 2) better code organization; 3) FP16 support; 4) beam search.
Step 3
Add the tricks proposed in Pix2Seq, such as layer drop, bounding box augmentation, multiple-crop augmentation, and so on.
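Of the tricks listed above, layer drop (stochastic depth) is easy to illustrate. This is a generic sketch, not this repo's implementation: during training each residual block's branch is dropped with some probability and scaled by 1/(1-p) when kept, so inference needs no rescaling.

```python
import random

def drop_layer(x, branch, drop_prob, training, rng=random):
    """Stochastic depth on one residual block: during training the
    residual branch is dropped with probability `drop_prob`, otherwise
    scaled by 1/(1-drop_prob); at inference the block runs unchanged."""
    if not training or drop_prob == 0.0:
        return x + branch(x)
    if rng.random() < drop_prob:
        return x  # branch dropped, identity only
    return x + branch(x) / (1.0 - drop_prob)
```

In a real Transformer this would wrap each attention and feed-forward sublayer, typically with a drop probability that increases linearly with depth.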
Acknowledgement
This codebase borrows heavily from DETR, CART, minGPT, and Fairseq, and is motivated by the method explained in Pix2Seq.