
Machine Vision: SAM (Segment Anything Model) Released, a Milestone for Computer Vision (CV)

Source link: https://www.taholab.com/26240

Video introduction: https://www.bilibili.com/video/BV11s4y1N7Hv/

GitHub project page: https://github.com/facebookresearch/segment-anything

Paper: https://arxiv.org/abs/2304.02643

Try it yourself here: https://segment-anything.com/demo

My results (left: segmentation mask; right: original image):


The content below is taken from the project's GitHub README.

Segment Anything

Meta AI Research, FAIR

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick

[Paper] [Project] [Demo] [Dataset] [Blog] [BibTeX]

SAM design

The Segment Anything Model (SAM) produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.


Installation

The code requires python>=3.8, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
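The stated minimums (python>=3.8, pytorch>=1.7, torchvision>=0.8) can be sanity-checked before installing. The `meets_min` helper below is illustrative, not part of the SAM codebase:

```python
# Check that an installed version meets the minimums the README asks for
# (python>=3.8, pytorch>=1.7, torchvision>=0.8). Illustrative helper only.

def meets_min(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.13.1' >= '1.7'."""
    # Drop local build suffixes like '+cu117' and non-numeric parts,
    # then compare the numeric components as tuples.
    parse = lambda v: tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())
    return parse(installed) >= parse(minimum)

print(meets_min("1.13.1+cu117", "1.7"))  # True: torch 1.13.1 satisfies >=1.7
print(meets_min("0.6.0", "0.8"))         # False: torchvision 0.6 is too old
```

In practice the same comparison can be run against `torch.__version__` and `torchvision.__version__` after installation.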

Install Segment Anything:

pip install git+https://github.com/facebookresearch/segment-anything.git

or clone the repository locally and install with

git clone [email protected]:facebookresearch/segment-anything.git
cd segment-anything; pip install -e .

The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. jupyter is also required to run the example notebooks.

pip install opencv-python pycocotools matplotlib onnxruntime onnx

Getting Started

First download a model checkpoint. Then the model can be used in just a few lines to get masks from a given prompt:

from segment_anything import SamPredictor, sam_model_registry
sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
predictor = SamPredictor(sam)
predictor.set_image(<your_image>)
masks, _, _ = predictor.predict(<input_prompts>)
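As a concrete illustration, point prompts are passed to `predict` as NumPy arrays: `point_coords` holds (x, y) pixel coordinates and `point_labels` marks each point as foreground (1) or background (0). The sketch below only builds the prompt arrays; running the predictor itself requires a downloaded checkpoint, and the example coordinates are arbitrary:

```python
import numpy as np

# Build the prompt arrays that SamPredictor.predict accepts.
# point_coords: (N, 2) pixel coordinates; point_labels: (N,) with
# 1 = foreground click, 0 = background click.
point_coords = np.array([[500, 375],   # click on the target object
                         [120, 640]])  # click marking background to exclude
point_labels = np.array([1, 0])

print(point_coords.shape, point_labels.shape)  # (2, 2) (2,)

# With a loaded model, the call would look like:
# masks, scores, logits = predictor.predict(
#     point_coords=point_coords,
#     point_labels=point_labels,
#     multimask_output=True,  # return several candidate masks with scores
# )
```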

or generate masks for an entire image:

from segment_anything import SamAutomaticMaskGenerator, sam_model_registry
sam = sam_model_registry["<model_type>"](checkpoint="<path/to/checkpoint>")
mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(<your_image>)
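The generator returns a list of per-mask records (dicts with keys such as `segmentation`, `area`, `bbox`, `predicted_iou`, and `stability_score`), which are typically filtered and ranked afterwards. The sketch below uses synthetic records standing in for real generator output:

```python
# Post-process the mask records that SamAutomaticMaskGenerator.generate
# returns: drop tiny masks, then rank the rest by predicted IoU.
def top_masks(masks, min_area=100, k=2):
    """Keep masks with area >= min_area, best predicted IoU first."""
    kept = [m for m in masks if m["area"] >= min_area]
    return sorted(kept, key=lambda m: m["predicted_iou"], reverse=True)[:k]

# Synthetic records; real ones also carry 'segmentation', 'bbox', etc.
sample = [
    {"area": 5000, "predicted_iou": 0.91},
    {"area": 40,   "predicted_iou": 0.99},  # too small, dropped
    {"area": 900,  "predicted_iou": 0.95},
]
best = top_masks(sample)
print([m["area"] for m in best])  # [900, 5000]
```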

Additionally, masks can be generated for images from the command line:

python scripts/amg.py --checkpoint <path/to/checkpoint> --model-type <model_type> --input <image_or_folder> --output <path/to/output>

See the example notebooks on using SAM with prompts and automatically generating masks for more details.

