How-To Guide

Query Logs with Loki

Goal: Find and analyze application and system logs using Loki and the LogQL query language.

Time: ~15 minutes

Prerequisites

  • Access to Grafana UI

  • Basic understanding of log formats

  • Application deployed and generating logs

Quick reference

Task                      LogQL Query
All logs from namespace   {namespace="my-app"}
Logs from specific pod    {namespace="my-app", pod="my-pod-xxx"}
Search for text           {namespace="my-app"} |= "error"
Exclude text              {namespace="my-app"} != "healthcheck"
Regex search              {namespace="my-app"} |~ "error|fail"
Rate of errors            rate({namespace="my-app"} |= "error" [5m])
Count by pod              sum by (pod) (count_over_time({namespace="my-app"}[1h]))

Step 1: Access Grafana Explore

  1. Open Grafana: https://grafana.ops.kup6s.net

  2. Login with admin credentials

  3. Click Explore (compass icon) in left sidebar

  4. Select Loki from data source dropdown (top)

Step 2: Basic LogQL queries

View all logs from a namespace

{namespace="hello-kup6s"}

Click Run query or press Shift+Enter.

Result: All log lines from pods in that namespace.

Filter by pod

{namespace="hello-kup6s", pod=~"hello-kup6s-.*"}

The =~ operator matches regex patterns.

Filter by container

{namespace="hello-kup6s", container="hello"}

Useful when pods have multiple containers.

Use label browser

Click Label browser button to see available labels instead of guessing.

Step 3: Text search and filtering

Search for specific text

{namespace="hello-kup6s"} |= "GET"

The |= operator finds logs containing “GET”.

Exclude unwanted logs

{namespace="hello-kup6s"} != "healthcheck"

Removes health check noise.

Chain multiple filters

{namespace="hello-kup6s"}
  |= "GET"
  != "/health"
  |= "200"

Finds GET requests, excludes health checks, only shows 200 responses.

Regex patterns

{namespace="hello-kup6s"} |~ "error|fail|timeout"

Matches logs containing any of these words.

Step 4: Parse structured logs

JSON logs

If your app logs JSON:

{namespace="hello-kup6s"}
  | json
  | level = "error"

After | json, you can filter on JSON fields.

Extract specific fields

{namespace="hello-kup6s"}
  | json
  | line_format "{{.timestamp}} [{{.level}}] {{.message}}"

Reformats output to show only specific fields.

Logfmt parsing

For key=value format logs:

{namespace="hello-kup6s"}
  | logfmt
  | status >= 400
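
As a hypothetical example, a logfmt line like the one below would have its status field extracted by | logfmt, so the status >= 400 filter keeps it:

level=error msg="upstream request failed" status=502 duration=1.3s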

Pattern parsing

For custom formats:

{namespace="nginx"}
  | pattern `<ip> - - <_> "<method> <uri> <_>" <status> <size>`
  | status >= 400

Extracts fields from unstructured logs.
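
As a rough illustration, the pattern above would match a hypothetical access-log line such as this one, binding ip, method, uri, status, and size:

203.0.113.7 - - [21/Oct/2025:10:00:00 +0000] "GET /index.html HTTP/1.1" 404 612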

Step 5: Metrics from logs

Count log lines

count_over_time({namespace="hello-kup6s"}[5m])

How many log lines in the last 5 minutes?

Rate of log lines

rate({namespace="hello-kup6s"}[5m])

Logs per second.

Error rate

sum(rate({namespace="hello-kup6s"} |= "error" [5m]))

Errors per second across all pods.

Top error sources

topk(5,
  sum by (pod) (
    count_over_time({namespace="hello-kup6s"} |= "error" [1h])
  )
)

Top 5 pods generating errors in the last hour.

HTTP status code distribution

sum by (status) (
  count_over_time(
    {namespace="nginx"}
      | json
      | __error__ = ""
    [5m]
  )
)

Count of each HTTP status code.

Step 6: Time range queries

Last 5 minutes

Use the time picker (top right): Last 5 minutes

For plain log queries the time range always comes from the time picker; the [5m] range selector is only valid inside metric functions, for example:

count_over_time({namespace="hello-kup6s"}[5m])

Specific time window

LogQL has no absolute-time syntax inside the query. Use the time picker's Absolute time range (for example 2025-10-21 10:00 to 11:00 UTC), or the start and end parameters of the Loki HTTP API.

Offset to compare

count_over_time({namespace="hello-kup6s"}[5m] offset 1h)

Counts log lines from the same 5-minute window one hour ago (useful for comparison). For plain log queries, shift the time picker instead.

Step 7: Advanced queries

Logs around an event

Find logs 5 minutes before/after a specific timestamp:

  1. Find the event

  2. Note the timestamp

  3. Click the log line → Show context

Or set the time picker to an absolute range around that timestamp (for example, five minutes on either side) and run the plain stream query again.

Compare error rates

sum(rate({namespace="hello-kup6s"} |= "error" [5m]))
/
sum(rate({namespace="hello-kup6s"}[5m]))

The ratio of error lines to all log lines.
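
To read it as a percentage, multiply by 100 (LogQL metric queries support scalar arithmetic); the query below just scales the same ratio:

sum(rate({namespace="hello-kup6s"} |= "error" [5m]))
/
sum(rate({namespace="hello-kup6s"}[5m]))
* 100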

Correlate with metrics

In Explore:

  1. Add second query (click + Add query)

  2. Switch to Prometheus

  3. Query:

    rate(container_cpu_usage_seconds_total{namespace="hello-kup6s"}[5m])
    

Now see logs and CPU usage side-by-side!

Step 8: Common use cases

Debug application crashes

{namespace="hello-kup6s"}
  |~ "panic|fatal|crash|killed"

Find slow requests

{namespace="hello-kup6s"}
  | json
  | duration > 1000

Finds requests taking more than 1000 ms (assuming the app logs duration as a plain number of milliseconds).
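
If the duration field instead carries a unit suffix (for example "250ms" or "1.5s"), LogQL compares it as a duration, so a threshold written with units also works:

{namespace="hello-kup6s"}
  | json
  | duration > 1s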

Track deployment issues

Set the time range to Last 5 minutes and run:

{namespace="hello-kup6s"} |= "error"

Shows errors right after a deployment.

Security audit

{namespace="hello-kup6s"}
  |~ "401|403|unauthorized"

Failed authentication attempts.

Resource exhaustion

{namespace="hello-kup6s"}
  |~ "OOM|out of memory|memory limit"

Memory issues.

Step 9: Save and share queries

Create a dashboard

  1. Click Add to dashboard (top right)

  2. Choose existing dashboard or create new

  3. Configure panel:

    • Title: “Application Errors”

    • Visualization: Logs or Time series

  4. Click Save

Create alerts from queries

  1. Switch to Alert tab

  2. Use your LogQL query as alert condition

  3. Set threshold (e.g., error rate > 10/min); see the example expression after this list

  4. Configure notifications
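
For instance, a minimal sketch of an expression such an alert could evaluate, firing when the namespace produces more than 10 error lines in a minute (namespace and threshold are illustrative):

sum(count_over_time({namespace="hello-kup6s"} |= "error" [1m])) > 10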

Step 10: Performance tips

Use narrow time ranges

For log queries, keep the Explore time picker narrow (e.g. Last 5 minutes) rather than scanning hours of history. In metric queries, prefer a short range selector:

count_over_time({namespace="hello-kup6s"}[5m])

over a long one such as [24h], which scans far more data per evaluation.

Filter early

Bad (slow):

{namespace="hello-kup6s"} | json | level = "error"

Better (fast):

{namespace="hello-kup6s"} |= "error" | json | level = "error"

Filter with |= before parsing!

Use specific labels

Bad (queries all namespaces):

{pod="my-pod-xxx"}

Better (queries one namespace):

{namespace="hello-kup6s", pod="my-pod-xxx"}

Limit results

For queries that return lots of lines, lower the Line limit in the Explore query options (the Loki HTTP API exposes the same control as a limit parameter).

Troubleshooting

“No data” result

Check:

  1. Is the namespace correct? Use label browser

  2. Is the time range right? Logs might be older/newer

  3. Are logs actually being shipped? Check pod logs:

    kubectl logs -n hello-kup6s my-pod-xxx
    

Query timeout

Query is too broad. Narrow the time range or add more filters:

{namespace="hello-kup6s", pod=~"specific-pod.*"}[5m]
  |= "keyword"

“Parse error”

Check LogQL syntax. Common mistakes:

  • Missing quotes: {namespace=hello} → {namespace="hello"}

  • Wrong operator: {namespace=="hello"} → {namespace="hello"}

  • Unclosed braces: {namespace="hello" → {namespace="hello"}

Labels not available

The label might not be indexed: only stream labels from the selector are, while fields inside the log line become labels only after a parser stage such as | json. Check which labels are available:

{namespace="hello-kup6s"}

Then click the Labels button to see all labels attached to the returned streams.
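
For example, a field inside a JSON log body becomes filterable only after parsing (request_id here is hypothetical):

{namespace="hello-kup6s"}
  | json
  | request_id="abc123"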

LogQL cheat sheet

Label matching

Operator   Meaning           Example
=          Equals            {namespace="hello"}
!=         Not equals        {namespace!="kube-system"}
=~         Regex match       {pod=~"hello-.*"}
!~         Regex not match   {pod!~"test-.*"}

Line filters

Operator   Meaning           Example
|=         Contains          |= "error"
!=         Not contains      != "health"
|~         Regex match       |~ "error|fail"
!~         Regex not match   !~ "debug|trace"

Parsers

Parser      Use for              Example
| json      JSON logs            | json | level="error"
| logfmt    key=value logs       | logfmt | status>=400
| pattern   Custom formats       | pattern "<ip> <_> <status>"
| regexp    Extract with regex   | regexp "(?P<code>\\d{3})"

Aggregations

Function          Purpose            Example
count_over_time   Count lines        count_over_time({...}[5m])
rate              Lines per second   rate({...}[5m])
sum               Total              sum(rate({...}[5m]))
avg               Average            avg by (pod) (rate({...}[5m]))
topk              Top N              topk(5, sum by (pod) (...))

Next steps