
Using docker-compose to package celery 4.1 + rabbitmq 3.7 services and build a microservice architecture

source link: https://v3u.cn/a_id_115


    As we all know, Celery is a simple, flexible, and reliable distributed system for processing large volumes of messages. A previous article, python3.7+Tornado5.1.1+Celery3.1+Rabbitmq3.7.16 for asynchronous task queues, explained installation, deployment, and usage in detail, but the process was far too tedious: first install Erlang, then install rabbitmq, then wade through all kinds of configuration, and finally even patch a third-party library's source code because of the async keyword conflict. Instead, we can use Docker to package the celery service as an image. From then on, whenever we use celery again, or another system depends on it, we simply run that image as a container, with no more tedious installation and configuration.

    First, create a celery_with_docker folder and cd into it: cd celery_with_docker
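    A minimal shell sketch of this step (the app folder created here is the one that, later in the article, holds the task scripts and gets mounted into the containers):

mkdir celery_with_docker
cd celery_with_docker
mkdir app    # will be mounted into the containers as /deploy/app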

    Create a Dockerfile:

FROM python
LABEL author="liuyue"
LABEL purpose=""


RUN apt update
RUN pip3 install setuptools

ENV PYTHONIOENCODING=utf-8

# Build folder
RUN mkdir -p /deploy/app
WORKDIR /deploy/app
# only copy requirements.txt; others will be mounted by -v
#COPY app/requirements.txt /deploy/app/requirements.txt
#RUN pip3 install -r /deploy/app/requirements.txt
RUN pip3 install celery

# run sh. Start processes in docker-compose.yml
#CMD ["/usr/bin/supervisord"]
CMD ["/bin/bash"]

    In other words, we use the official python image as the base and install celery on top of it.
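    If you want to sanity-check the image on its own, you can build it manually with the same tag the docker-compose file below assigns (optional; docker-compose will also build it for you):

docker build -t celery-with-docker-compose:latest .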

    Next, create docker-compose.yml:

# RabbitMQ credentials: liuyue / liuyue
version: '3.4'

services:
    myrabbit:
        #restart: always
        #build: rabbitmq/
        image: rabbitmq:3-management
        # hostname: rabbit-taiga
        environment:
            RABBITMQ_ERLANG_COOKIE: SWQOKODSQALRPCLNMEQG
            # RABBITMQ_DEFAULT_USER: "guest"
            # RABBITMQ_DEFAULT_PASS: "guest"
            # RABBITMQ_DEFAULT_VHOST: "/"
            # RABBITMQ_NODENAME: taiga
            RABBITMQ_DEFAULT_USER: liuyue
            RABBITMQ_DEFAULT_PASS: liuyue
        ports:
            - "15672:15672"
            # - "5672:5672"
    
    api:
        #restart: always
        stdin_open: true
        tty: true
        build: ./
        image: celery-with-docker-compose:latest
        volumes:
            - ./app:/deploy/app
        ports:
            - "80:80"
        command: ["/bin/bash"]

    celeryworker:
        image: celery-with-docker-compose:latest
        volumes:
            - ./app:/deploy/app
        command: ['celery', '-A', 'tasks', 'worker', '-c', '4', '--loglevel', 'info']
        depends_on:
            - myrabbit

    This compose file pulls the rabbitmq image on its own and starts the rabbitmq service with the username and password liuyue:liuyue. It then builds the celery image with a project directory at /deploy/app, maps the host's app directory onto /deploy/app via a volume mount, and finally starts the celery worker.
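    Before bringing anything up, the compose file can be checked and the rabbitmq image pre-pulled; a short sketch using standard docker-compose subcommands:

docker-compose config          # validate the file and print the resolved configuration
docker-compose pull myrabbit   # pre-pull the rabbitmq image (optional)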

    Finally, all we need to do on the host is create an app folder and add a few task scripts.

    Create tasks.py:

from celery import Celery

# the broker host is the docker-compose service name; Docker's internal DNS resolves it
SERVICE_NAME = 'myrabbit'
app = Celery(backend='rpc://', broker='amqp://liuyue:liuyue@{0}:5672/'.format(SERVICE_NAME))


@app.task
def add(x, y):
    print(123123)
    return x + y

    Create the task invocation script test.py:

import time
from tasks import add
# celery -A tasks worker -c 4 --loglevel=info


t1 = time.time()
result = add.delay(1, 2)
print(result.get())
 
print(time.time() - t1)
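    Note that result.get() blocks until the worker has finished. If you want to poll instead, or bound the wait, Celery's AsyncResult API also offers ready() and a timeout argument to get(); a minimal sketch (not part of the original script):

import time
from tasks import add

result = add.delay(1, 2)
while not result.ready():      # poll the result backend instead of blocking
    time.sleep(0.1)
print(result.get(timeout=10))  # or block, but give up after 10 seconds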

    The final project layout looks like this:

    

    [screenshot: project directory structure]
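    Reconstructed from the files created above (the screenshot itself is not reproduced here), the tree should look roughly like this:

celery_with_docker/
├── Dockerfile
├── docker-compose.yml
└── app/
    ├── tasks.py
    └── test.py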

    Then, from the project root, run: docker-compose up --force-recreate
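    If you would rather not keep the compose output in the foreground, the same command can run detached and you can tail a single service's logs; a sketch using standard docker-compose flags:

docker-compose up -d --force-recreate
docker-compose logs -f celeryworker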

    At this point the celery and rabbitmq services are running.

    Open http://localhost:15672 in a browser and log in with liuyue:liuyue.

    

    [screenshot: RabbitMQ management console]

    Everything looks good, so now let's go inside the api container:

docker exec -i -t celery-with-docker-compose-master_api_1 /bin/bash
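    The container name above depends on the name of the project directory; if yours differs, you can address the service by its compose name instead (an equivalent, standard alternative):

docker-compose exec api /bin/bash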

    Inside the container you can see that the host's app folder has been shared in via the volume mount.

    

    [screenshot: mounted app directory inside the container]

    Next, run the asynchronous task: python3 test.py

    

    [screenshot: running python3 test.py]

    You can see that it executed successfully.

    

    [screenshot: successful task result]

    In other words, nothing needs to be configured on the host beyond installing Docker itself. The entire asynchronous task queue is built and run inside Docker containers, fully isolated; only the actual code and scripts live on the host and are shared in through Docker volume mounts. Developers can focus on writing code on the host without worrying about configuration or deployment.
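    A nice side effect of this setup (not shown in the original article, but supported by standard docker-compose): the worker service can be scaled horizontally without touching any code, for example:

docker-compose up -d --scale celeryworker=3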

    Finally, the complete project code is available at: https://gitee.com/QiHanXiBei/celery-with-docker-composer

