Storing Grafana Loki log data in a Google Cloud Storage bucket
source link: https://vitobotta.com/2022/12/25/storing-grafana-loki-log-data-in-a-google-cloud-storage-bucket/
Until recently I always configured Loki to use persistent volumes to store log data. That works well, but the problem with persistent volumes is that it's difficult to predict how large they need to be, since that depends on the amount of logs and the configured retention. So, to simplify things and stop worrying about volume sizing, I decided to switch the storage to a Google Cloud Storage bucket instead.
Since our apps are hosted in Google Kubernetes Engine in the same location, performance is still pretty good and we can store an unlimited amount of logs indefinitely if we want.
In this post I am going to describe how to configure Loki to use a Google Cloud Storage bucket to store log data.
In a terminal, set the following environment variables so we can remove some duplication in the commands we need to run:

```shell
ENVIRONMENT=production
PROJECT=brella-$ENVIRONMENT
BUCKET_NAME=$PROJECT-loki
SA_NAME=loki-logging
NS=logging
SA=${SA_NAME}@${PROJECT}.iam.gserviceaccount.com
```
Creating the storage bucket
To create the bucket and check that it exists, run:

```shell
# Create the bucket with uniform bucket-level access, in the EU multi-region
gsutil mb -b on -l eu -p ${PROJECT} gs://${BUCKET_NAME}/

# Verify that the bucket exists
gsutil ls gs://${BUCKET_NAME}/
```
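Optionally, you can also inspect the bucket's metadata to double-check the location and that uniform bucket-level access is enabled. This is just a verification step, not required for the setup:

```shell
# Show the bucket's metadata, including the location constraint
# and the uniform bucket-level access ("Bucket Policy Only") setting
gsutil ls -L -b gs://${BUCKET_NAME}/
```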
Creating the service account
To create the service account and grant it access to the bucket, run:

```shell
gcloud iam service-accounts create ${SA_NAME} \
  --project ${PROJECT} \
  --display-name="Service account for Loki"

gcloud projects add-iam-policy-binding ${PROJECT} \
  --member="serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
  --project ${PROJECT} \
  --role="roles/storage.objectAdmin"
```
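If you want to confirm that the service account and the role binding were created correctly, a quick way is to describe the account and filter the project's IAM policy for it (just a verification sketch, not required):

```shell
# Confirm the service account exists
gcloud iam service-accounts describe ${SA}

# List the roles granted to the Loki service account on the project;
# roles/storage.objectAdmin should appear in the output
gcloud projects get-iam-policy ${PROJECT} \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:${SA}" \
  --format="table(bindings.role)"
```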
Deploying Loki
First, we need to create a file containing the configuration options for the Helm chart. Run the following command to create the file (I'm using /tmp/loki.yaml, but you can put it wherever you want):
```shell
cat > /tmp/loki.yaml <<EOF
fullnameOverride: loki
enabled: false

distributor:
  replicas: 3
  maxUnavailable: 1

gateway:
  replicas: 3
  maxUnavailable: 1
  basicAuth:
    enabled: false

customParams:
  gcsBucket: loki

ingester:
  replicas: 3
  maxUnavailable: 1
  persistence:
    enabled: true

querier:
  replicas: 3
  maxUnavailable: 1
  persistence:
    enabled: true

serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: ${SA}

loki:
  config: |
    auth_enabled: false

    server:
      http_listen_port: 3100

    distributor:
      ring:
        kvstore:
          store: memberlist

    memberlist:
      join_members:
        - {{ include "loki.fullname" . }}-memberlist

    schema_config:
      configs:
        - from: 2020-09-07
          store: boltdb-shipper
          object_store: gcs
          schema: v11
          index:
            prefix: loki_index_
            period: 24h

    ingester:
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      chunk_idle_period: 10m
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_retain_period: 1m
      max_transfer_retries: 0
      wal:
        dir: /var/loki/wal

    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      max_cache_freshness_per_query: 10m
      retention_period: 2160h
      split_queries_by_interval: 15m

    storage_config:
      gcs:
        bucket_name: {{ .Values.customParams.gcsBucket }}
      boltdb_shipper:
        active_index_directory: /var/loki/boltdb-shipper-active
        cache_location: /var/loki/boltdb-shipper-cache
        cache_ttl: 24h
        shared_store: gcs

    chunk_store_config:
      max_look_back_period: 0s

    table_manager:
      retention_deletes_enabled: true
      retention_period: 2160h

    query_range:
      align_queries_with_step: true
      max_retries: 5
      cache_results: true
      results_cache:
        cache:
          enable_fifocache: true
          fifocache:
            max_size_items: 1024
            validity: 24h

    frontend_worker:
      frontend_address: loki-query-frontend:9095

    frontend:
      log_queries_longer_than: 5s
      compress_responses: true
      tail_proxy_url: http://loki-querier:3100

    compactor:
      shared_store: gcs
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
      compaction_interval: 10m

    ruler:
      storage:
        type: local
        local:
          directory: /etc/loki/rules
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
EOF
```
I just want to highlight one small bit:

```yaml
serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: ${SA}
```

This annotation ties the Kubernetes service account created by the chart to the Google service account we created earlier, so that Loki can authenticate to the bucket with Workload Identity instead of a key file.
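Once everything is deployed and the Workload Identity binding shown later in the post is in place, one way to sanity-check the setup is to run a throwaway pod with the same Kubernetes service account and ask the metadata server which identity it sees. This is just a smoke-test sketch; it assumes the chart creates a service account named loki in the logging namespace (which is also what the Workload Identity binding below assumes):

```shell
# Run a one-off pod using the Loki Kubernetes service account and list
# the active credentials; the output should show the Google service
# account rather than the node's default one
kubectl run wi-test -n logging -it --rm --restart=Never \
  --image=google/cloud-sdk:slim \
  --overrides='{"spec":{"serviceAccountName":"loki"}}' \
  -- gcloud auth list
```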
Next steps:

- Create the namespace:

```shell
kubectl create ns ${NS}
```

- Allow the Kubernetes service account to impersonate the Google service account via Workload Identity:

```shell
gcloud iam service-accounts add-iam-policy-binding --project ${PROJECT} ${SA} \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$PROJECT.svc.id.goog[logging/loki]"
```
- Add the Grafana Helm repository:

```shell
helm repo add grafana https://grafana.github.io/helm-charts/
helm repo update
```
- Install the loki-distributed chart with our configuration:

```shell
helm upgrade --install loki grafana/loki-distributed \
  -f /tmp/loki.yaml \
  --set customParams.gcsBucket=${BUCKET_NAME} \
  --version 0.67.1 \
  --namespace ${NS}
```
- Install Promtail, tolerating the spot-node taint so logs are collected from all nodes:

```shell
helm upgrade --install \
  --namespace ${NS} \
  --set "loki.serviceName=loki-query-frontend" \
  --set "tolerations[0].key=cloud.google.com/gke-spot,tolerations[0].operator=Exists,tolerations[0].effect=NoSchedule" \
  --set "config.clients[0].url=http://loki-gateway/loki/api/v1/push" \
  promtail grafana/promtail
```
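Before checking the bucket, it's worth confirming that all the components came up. This is a quick sanity check; the resource names assume the fullnameOverride of loki used in the values file above:

```shell
# All Loki components and the Promtail DaemonSet should reach Running
kubectl -n logging get pods

# Wait for the ingester StatefulSet to be fully rolled out
kubectl -n logging rollout status statefulset/loki-ingester
```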
To confirm that the logs are being stored in the bucket, wait 10-15 minutes, then run:

```shell
gsutil ls gs://${BUCKET_NAME}/
```
You can also wait for a while to collect some logs, then restart all the Loki components (you'll see a few Deployments as well as a couple of StatefulSets and the Promtail DaemonSet) and confirm that you can still search the old logs, which proves the data is retrieved from the bucket and not from local storage. In my case, since it was the first time I set up Loki with a Google bucket, I wanted to be 100% sure about this, so I tested by completely uninstalling Loki and ensuring I could still search through old logs after reinstalling it. It works nicely, and so far I haven't seen any difference in query speed compared to persistent volumes. Perhaps this might change as the amount of logs grows; I don't know yet.
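One convenient way to run these search tests without going through Grafana is to port-forward the Loki gateway and hit the HTTP query API directly. A sketch, assuming the gateway service is named loki-gateway and listens on port 80 (the defaults with the fullnameOverride used above), and using a hypothetical {namespace="logging"} label selector:

```shell
# Forward the Loki gateway to a local port
kubectl -n logging port-forward svc/loki-gateway 8080:80 &

# Query recent log lines for a label selector; JSON results mean
# Loki can read the index and chunks back from storage
curl -G -s "http://localhost:8080/loki/api/v1/query_range" \
  --data-urlencode 'query={namespace="logging"}' \
  --data-urlencode 'limit=5'
```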
If you already use Loki and have the same problem with persistent volumes that I had, or if you are looking to use Loki as a lightweight log collector in Kubernetes and also use GCP, then I hope this post was useful. Please let me know in the comments if you run into any issues.