The case against serverless

Friday, February 11, 2022 · 4 min read

Has Lambda been superseded by Kubernetes? Are we better off with the versatility of just instantiating a Docker image rather than being forced into Lambda's protocols?


There has been a movement towards serverless for some time now. The argument is that horizontal scaling of services is something that humans should not spend their valuable time concerning themselves with.

There are a large number of solutions out there to autoscale services on demand.

Say you want to create a "Hello World" service (who wouldn't?) and make it able to handle arbitrarily many requests on demand, but you don't want to spend money running a lot of servers that just sit idle between load peaks.

The service could look something like this:

import express, { Request, Response } from 'express';

const app = express();

// A single GET endpoint that returns a fixed greeting.
app.get('/', (req: Request, res: Response) => {
  res.send('Hello World!');
});

app.listen(3000);

This is a nice web service which has the clear benefit that you can run it locally, test it, and debug it in your local development environment without any changes to your usual setup.
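
For instance, a minimal smoke test — assuming Node 18+ for its built-in fetch, with the server already running on port 3000 — could be as small as:

// smoke-test.ts - hit the locally running service and print the reply.
// Assumes Node 18+ (global fetch) and the Express app listening on :3000.
async function main() {
  const response = await fetch('http://localhost:3000/');
  console.log(response.status, await response.text()); // 200 Hello World!
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});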

AWS Lambda

If you want to achieve the same with Lambda, you won't be able to simply run it locally any more. At least not without extensive extra setup in your local development environment.

For starters, the "Hello World" service now looks like this:

import {
  APIGatewayProxyEvent,
  APIGatewayProxyResult
} from 'aws-lambda';

// The same greeting, but shaped as an API Gateway proxy handler:
// Lambda passes in an event object and expects a structured result back.
export const lambdaHandler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  return {
    statusCode: 200,
    body: 'Hello World!'
  };
};

This obviously means that you won't be able to simply start the service and hit it with curl to run a test query.

You will also need to deal with the AWS Lambda setup itself, which I won't go into in detail, but suffice it to say that it is rather involved.
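
To give a rough idea, here is a sketch of the kind of AWS SAM template you end up maintaining just to wire the handler above to an API Gateway route (the handler path, runtime and code location are placeholders to adjust for your own project):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Placeholder values - point these at your own build output.
      Handler: app.lambdaHandler
      Runtime: nodejs16.x
      CodeUri: hello-world/
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /
            Method: get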


To be fair, once you have gone through all of this pain, you can get a local development environment up and running with hot reload for easier development. But honestly it all seems rather pointless when all you really want is to scale the number of running instances up on demand, without having to deal with server provisioning and configuration.

On top of these hurdles you also get vendor lock-in, so porting your services out of AWS and into Google Cloud or other competing services will be a significant pain.

Alternatives

There are, of course, alternatives out there which mitigate issues such as vendor lock-in.

Fission

Fission, for instance, has a developer experience which is almost identical to AWS Lambda, but it runs on top of Kubernetes, which makes it vendor agnostic and relatively easy to run on Docker Desktop if you want your setup locally (which is nice).
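
To illustrate, a Fission function for its Node.js environment tends to look a lot like the Lambda handler above. The context-in, status-and-body-out shape below follows Fission's Node examples, so treat it as a sketch to double-check against the Fission version you actually run:

// Sketch of the greeting as a Fission Node.js function.
// The CommonJS export and the context/return shape are assumptions
// based on Fission's Node environment examples.
module.exports = async function (context: unknown) {
  return {
    status: 200,
    body: 'Hello World!'
  };
};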

Knative

Knative allows you to run a web service with zero changes to the actual service, so you get all the benefits of easy local development, with something that is actually still just a web server, along with all the benefits of serverless.

The configuration needed to deploy your web service is relatively simple (once you have dockerized it and pushed the image to Docker Hub):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: my-user/hello-world
          env:
            - name: TARGET
              value: "Hello World!"

Caveats

These alternatives do come with their own configuration hurdles which you need to take care of if you want to customize your endpoint or your load balancer configuration (e.g. if you want to know the client IP behind a load balancer, you will most likely need to set up the PROXY protocol).
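
As an example of the kind of knob involved: with the ingress-nginx controller, PROXY protocol support is switched on through the controller's ConfigMap, roughly as sketched below (the name and namespace assume a typical ingress-nginx install, and your cloud load balancer has to be configured to send PROXY protocol as well):

apiVersion: v1
kind: ConfigMap
metadata:
  # Name and namespace assume a standard ingress-nginx installation.
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"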

Kubernetes

These alternatives do require that you have a Kubernetes cluster running with your hosting provider (which is usually relatively simple to get). But as long as you are already setting up Kubernetes, you might as well take the slight extra step and get the full flexibility of specifying your Ingress and endpoint configuration yourself. It is really not a lot of extra work, and it gains you a lot of extra value:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project
  template:
    metadata:
      labels:
        app: my-project
    spec:
      containers:
      - name: my-project
        image: my-user/my-project
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: my-project
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - my-hostname.com
    secretName: my-tls
  rules:
  - host: my-hostname.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80

Conclusion

Why would I want to deal with the added complexity of Kubernetes, when I can "just" use lambdas?

It turns out that defining the services and describing an image is a one-shot kind of endeavour, whereas running a service locally happens every time you update a line of code and re-run your tests.

I would much rather optimise for the ease of running curl or using Postman than need special client software to test my code just because it requires some esoteric environment to be spun up.

It really all comes down to optimising to minimise developer time, rather than optimising to make things easy for the computers.

