
Completely free Cloud native microservice for file storage and processing

source link: https://github.com/postput/postput/


Postput: Cloud native storage operator

Upload, download and perform operations on the fly on your files

Postput is a microservice that sits between your file storage and your end-users.

Its primary function is to perform various operations on your files.

It can also be used to simplify the way you download/upload files with various storage providers.


Demo

[demo animation]

TL;DR

1. Launch the full stack:

wget https://raw.githubusercontent.com/postput/postput/master/docker-compose.yaml -O postput-docker-compose.yaml && \
docker-compose  -f postput-docker-compose.yaml up

2. Upload any kind of file

curl -F 'file=@postput-docker-compose.yaml' "http://localhost:2000/my_memory_files?name-override=docker-compose.yaml"

3. Download the file you've just uploaded

curl http://localhost:2000/my_memory_files/docker-compose.yaml

4. Upload an image by providing its URL

curl -X POST http://localhost:2000/my_memory_files/\?url=https://i2-prod.mirror.co.uk/incoming/article14334083.ece/ALTERNATES/s810/3_Beautiful-girl-with-a-gentle-smile.jpg\&name-override=my-image.jpg

5. Resize, blur, rotate, round and optimize this image on the fly. Operations are applied in the order they appear in the request.

curl "http://localhost:2000/my_memory_files/my-image.jpg?resize=300,300&blur=5&rotate=90&mask=elipse&format=webp"

6. Create, update and delete your own storage providers with the admin dashboard at: http://localhost:2002

Operations Available

Operations are applied one after another. Keep in mind that order may matter depending on which operations you perform (see the example after the list below).

  • resize: resize the image according to the specified value. Example: ?resize=200 or ?resize=10,20
  • rotate: rotate the image by the specified number of degrees. Example: ?rotate=90
  • blur: perform a Gaussian blur on the image. Example: ?blur=5
  • mask: apply a mask to the image. Currently, only the elipse mask is supported. Example: ?mask=elipse
  • format: change the format of the image. Supported values: jpeg, jpg, png, gif, webp, tiff, raw. Example: ?format=webp
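For example, because operations run in the order they appear, resizing before rotating is not necessarily the same as rotating before resizing. Both requests below use the image uploaded in the TL;DR section; only the parameter order differs:

# Resize to 300x300 first, then rotate the result by 90 degrees
curl "http://localhost:2000/my_memory_files/my-image.jpg?resize=300,300&rotate=90"

# Rotate first, then resize: the resize is computed on the rotated image,
# so the output can differ from the request above
curl "http://localhost:2000/my_memory_files/my-image.jpg?rotate=90&resize=300,300"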

Supported Storage Providers

See storage reference

  • Amazon S3
  • Google Cloud Storage (GCS)
  • Spaces (DigitalOcean)
  • OpenStack
  • Azure
  • Backblaze
  • Scaleway
  • In memory
  • Filesystem
  • Webfolder
  • Proxy
  • All S3-compliant storages

Amazon (S3)

{
  "custom": {
    "keyId": "AKXXXXXXXXXXXXXXXXXX",
    "key": "XCKlXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "container": "mycontainer",
    "region": "us-east-1"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

Google Cloud Storage (GCS)

{
  "custom": {
    "keyFile": {
      "type": "service_account",
      "project_id": "xxxxxxx-xxxx-xxx",
      "private_key_id": "2bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "private_key": "-----BEGIN PRIVATE KEY-----\nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n-----END PRIVATE KEY-----\n",
      "client_email": "xxxxxxxxxx@xxxxxxx-xxxx-xxx.iam.gserviceaccount.com",
      "client_id": "xxxxxxxxxxxxxxxxxxxxx",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://accounts.google.com/o/oauth2/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxgserviceaccount.com"
    },
    "container": "mycontainer",
    "projectId": "xxxxxxxx-xxxx-xxxx"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

Spaces (DigitalOcean)

{
  "custom": {
    "endpoint": "fra1.digitaloceanspaces.com",
    "bucket": "mybucket",
    "accessKeyId": "XXXXXXXXXXXXXXXXXXXX",
    "secretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

Openstack

{
  "custom": {
    "username": "xxxxxxxxxxxx",
    "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "tenantId": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "region": "xxxx",
    "authUrl": "https://auth.cloud.ovh.net/",
    "version": "v2.0",
    "container": "mycontainer"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

Azure

{
  "custom": {
    "storageAccount": "my-storage-account",
    "storageAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==",
    "container": "xxxxxxxxx"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/", "https://www.my-other-domain.com"]
}

Backblaze

{
  "custom": {
    "applicationKeyId": "qsd5f46qs54fd654q6sdf46q5s",
    "applicationKey": "fq6sd5f65q4sg654sf6g54sfd65g4s6",
    "bucketName": "mybucketname",
    "bucketId": "sqd6f56sqd4f65s4df654sq6fd4"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/", "https://www.my-other-domain.com"]
}

Scaleway

{
  "custom": {
    "endpoint": "s3.fr-par.scw.cloud",
    "bucket": "mybucket",
    "accessKeyId": "XXXXXXXXXXXXXXXXXXXXX",
    "secretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

In memory

Allows you to store your files in memory. All stored files will be lost upon restart. It is mostly useful for testing and demo purposes.

{  
  "custom": {},  
  "allowUpload": true,  
  "urls": ["http://localhost:2000/"]  
}

Filesystem

No configuration is required. By default, files will be stored in the public folder.

{  
  "custom": {},  
  "allowUpload": true,  
  "urls": ["http://localhost:2000/"]  
}

If your files are located in a specific folder, you can specify its path, be it relative or absolute.

Relative path (the root is the root of the api project)

{
  "custom": {
    "path": "public/default"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

Absolute path

{
  "custom": {
    "path": "/tmp"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

Webfolder

! Upload is not supported for that kind of storage

{
  "custom": {
    "method": "GET",
    "uri": "https://www.yoururl.com",
    "qs": {
      "my-first-query-string": "myvalue",
      "my-second-query-string": "my-second-value"
    }
  },
  "urls": ["http://localhost:2000/"]
}

Proxy

! Upload is not supported for that kind of storage

It is not a storage provider in itself. A Proxy allows you to connect to any file that is addressable through a publicly-available URL.

{
  "custom": {
    "allowedHosts": ["storage.speaky.com"]
  },
  "urls": ["http://localhost:2000/"]
}
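The exact request shape for a proxy download is not documented in this README; one plausible pattern, mirroring the upload-by-URL call from the TL;DR section and assuming a proxy storage named my_proxy, would be to pass the target file through the url query parameter and chain transformations onto it:

# Hypothetical call: fetch a file from an allowed host through the proxy storage
# and resize it on the fly (the storage name and parameter usage are assumptions)
curl "http://localhost:2000/my_proxy/?url=https://storage.speaky.com/some-image.jpg&resize=200"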

Note: upload is not allowed on Webfolder storages, and a Proxy only serves as a transformer.

All s3 compliant storages

Postput can integrate with every type of S3-compliant storage provider. Wise readers may have noticed that Scaleway and Spaces are actually S3 compliant and thus share the same config pattern. If you want to integrate another kind of S3-compliant storage, just adapt the endpoint accordingly.

{
  "custom": {
    "endpoint": "my.endpoint.com",
    "bucket": "mybucket",
    "accessKeyId": "XXXXXXXXXXXXXXXXXXXXX",
    "secretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "allowUpload": true,
  "urls": ["http://localhost:2000/"]
}

Install

Clone this git repo with its dependencies.

git clone --recursive git@github.com:postput/postput.git
# or, over HTTPS:
git clone --recursive https://github.com/postput/postput.git

Start the whole stack (API + admin backend and frontend)

cd postput
docker-compose up

Run it locally

You may want to run each service independently on your computer to see what's happening under the hood. Good news: each microservice can be started independently.

Prerequisites:

  • Node.js > 8
  • A PostgreSQL database up and running. You can spawn one very easily thanks to the docker-compose file provided at the root of this repository:
docker-compose up postput_db

Launch the API and the admin backend

Once the database is started, you can start the API and the admin-backend. You can customize database connection parameters with environment variables.

Start the API:

cd api
npm i
export POSTGRESQL_PORT=5555
export LISTEN_PORT=1999
npm start

Start the admin-backend

cd admin-backend
npm i
export POSTGRESQL_PORT=5555
export LISTEN_PORT=1998
npm start

Launch the admin frontend

Once the admin backend is started, you can start the admin frontend, which allows you to create, read, update and delete storages (CRUD). The frontend is designed to communicate with the backend, so you must configure it to hit the admin-backend on the port it is listening on (1998 in this case).

Start the admin-frontend:

cd admin-frontend
npm i
ng serve -c debug

The debug configuration is a profile that makes the frontend hit port 1998, so it will just work out of the box. Unfortunately, Angular does not play well with environment variables, so if you want or have to modify this port, you'll have to modify the debug profile or create another one. The file to modify is located at: https://raw.githubusercontent.com/postput/admin-frontend/master/src/environments/environment.prod.ts

Configuration

API configuration

Global configuration

Environment variables

  • ENV: sets the Node.js environment. Example: development | production
  • LISTEN_PORT: the port the API listens on. Example: 2000
  • URLS: list of URLs that will be appended to the end of each storage configuration. Example: ["http://localhost:2000/", "https://www.my-other-url.com"]
  • ALLOW_UPLOAD: specifies whether upload is allowed for every storage. Can be overridden by storage-specific configuration. Example: true
  • POSTGRESQL_HOST: PostgreSQL host. Example: localhost
  • POSTGRESQL_PORT: PostgreSQL port. Example: 5432
  • POSTGRESQL_USERNAME: PostgreSQL username. Example: postput
  • POSTGRESQL_PASSWORD: PostgreSQL password. Example: postput
  • POSTGRESQL_DATABASE: PostgreSQL database. Example: postput
  • SEQUELIZE_FORCE_SYNC: forces deletion of tables if they exist before recreating them. You want to disable this in production. Example: true
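For instance, a minimal local setup that overrides only the connection and listen settings might look like this (the values are illustrative and should match your own database):

# Point the API at a local PostgreSQL instance and expose it on port 2000
export ENV=development
export LISTEN_PORT=2000
export POSTGRESQL_HOST=localhost
export POSTGRESQL_PORT=5432
export POSTGRESQL_USERNAME=postput
export POSTGRESQL_PASSWORD=postput
export POSTGRESQL_DATABASE=postput
export SEQUELIZE_FORCE_SYNC=false
npm start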

Storage specific configuration

All parameters listed below are config keys that must be put at the root of your storage configuration. See Storage reference

  • allowUpload: specifies whether upload is allowed for that storage. Takes precedence over the global configuration. Example: true
  • urls: storage-specific URLs. Example: ["http://localhost:3000/"]
  • custom: storage-specific config, see Supported Storage Providers. Example: {}

How to

How do I create my custom storage provider?

With fixtures (Recommended)

The Postput API is designed to sync every JSON file it finds in the data/storage directory with the database every time it starts.

This is the preferred method if you plan to use postput in production, because you'll have consistent storage info upon restart even if you decide to modify or reset your PostgreSQL instance.

You'll have to create a JSON file that follows the storage reference in the data/storage directory.

The method you use to create that file is left to you. If you run on Kubernetes, I recommend creating it through a secret mounted at that path.
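As a rough sketch, assuming a secret named postput-storage (the name is illustrative), you could create it from your storage definition and then mount it at data/storage through your deployment spec:

# Create a secret holding the storage definition (the secret name is illustrative);
# the mount at data/storage still has to be declared in the api deployment's
# volumes/volumeMounts configuration
kubectl create secret generic postput-storage --from-file=my-storage.json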

If you don't use Kubernetes, you can still create a Docker image based on the API image (the image built with this Dockerfile):

Dockerfile

FROM postput/api
COPY my-storage.json ./data/storage/
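Assuming that Dockerfile and my-storage.json sit in the same directory, building the derived image is a plain docker build (the image tag below is illustrative):

# Build an api image that ships with your storage definition baked in
docker build -t postput-api-with-storage .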

With Postgresql

This is the fastest way to create a storage if you're familiar with SQL. You can use any database tool that supports PostgreSQL, such as DBeaver or Navicat. There are only two tables, Storage and StorageType, and the only one you want to modify is Storage.
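As a starting point, assuming the default credentials used elsewhere in this README (postput/postput on localhost:5432, database postput) and the psql client, you can inspect both tables before editing anything; the exact table name casing depends on how Sequelize created them, so adjust the quoting if needed:

# List the tables, then dump the storage types and the existing storages
psql "postgresql://postput:postput@localhost:5432/postput" -c '\dt'
psql "postgresql://postput:postput@localhost:5432/postput" -c 'TABLE "StorageType";'
psql "postgresql://postput:postput@localhost:5432/postput" -c 'TABLE "Storage";'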

With the admin dashboard

This is the easiest way to create your storage. Simply access the admin frontend. If you started the project using the provided docker-compose file, it is located at localhost:2002.

How do I deploy it on production?

On Kubernetes

With Helm (Recommended)

Use a Helm chart [available soon]

FAQ

My files are already stored on amazon S3. Do I have to migrate them if I want to use postput?

No, postput is not a storage in itself. The only thing you have to do is tell postput where your bucket is by creating an S3 storage provider and providing the right config for Amazon S3.
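Once that storage exists, downloads go through postput exactly like in the TL;DR section. Assuming you named the S3 storage my_s3_files (the name is yours to choose), a resized download of a file already sitting in your bucket would look like this:

# Fetch a file from your existing S3 bucket through postput, resized on the fly
curl "http://localhost:2000/my_s3_files/existing-photo.jpg?resize=400"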

Roadmap

By end of November 2019:

  • Implement Helm chart for Kubernetes cluster

By end of December 2019:

  • Implement Webhooks capabilities
  • Face detection ( face-api? opencv?)

Credits

Most operations (resize, rotate, blur, mask, format, ...) are performed by the sharp library. Cloud storage integration is done with the help of pkgcloud, aws-s3 and backblaze-b2. For the in-memory storage, I use memfs. Sequelize helps me with database communication.

