Logging


Cortex provides a logging solution out of the box, without requiring any configuration. By default, logs are collected with Fluent Bit for every API kind and exported to each cloud provider's logging solution (CloudWatch on AWS, Stackdriver on GCP). While developing, you can also view the logs of a single API replica with the cortex logs command.

Cortex logs command

The cortex CLI tool provides a command to quickly check the logs for a single API replica while debugging.

To check the logs of an API, run one of the following commands:

# RealtimeAPI
cortex logs <api_name>

# BatchAPI or TaskAPI
cortex logs <api_name> <job_id>  # the job needs to be in a running state

Important: this method won't show the logs of all API replicas, and is therefore not a complete logging solution.

Logs on AWS

For AWS clusters, logs will be pushed to CloudWatch using Fluent Bit. A log group with the same name as your cluster will be created to store your logs. API logs are tagged with labels to help with log aggregation and filtering.

Below are some sample CloudWatch Logs Insights queries:

RealtimeAPI:

fields @timestamp, message
| filter labels.apiName="<INSERT API NAME>"
| filter labels.apiKind="RealtimeAPI"
| sort @timestamp asc
| limit 1000

BatchAPI:

fields @timestamp, message
| filter labels.apiName="<INSERT API NAME>"
| filter labels.jobID="<INSERT JOB ID>"
| filter labels.apiKind="BatchAPI"
| sort @timestamp asc
| limit 1000

TaskAPI:

fields @timestamp, message
| filter labels.apiName="<INSERT API NAME>"
| filter labels.jobID="<INSERT JOB ID>"
| filter labels.apiKind="TaskAPI"
| sort @timestamp asc
| limit 1000
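
These queries can be run from the CloudWatch console (Logs Insights), or from the command line with the AWS CLI. Below is a minimal sketch, assuming the AWS CLI is configured for the account and region of your cluster; the log group name, API name, and time window are placeholders to adjust:

# the log group has the same name as your cluster; the time window here is the last hour
aws logs start-query \
  --log-group-name "<INSERT CLUSTER NAME>" \
  --start-time $(($(date +%s) - 3600)) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, message | filter labels.apiName="<INSERT API NAME>" | filter labels.apiKind="RealtimeAPI" | sort @timestamp asc | limit 1000'

# start-query returns a queryId; fetch the results once the query has finished running
aws logs get-query-results --query-id "<QUERY ID FROM PREVIOUS COMMAND>"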

Logs on GCP

Below are some sample Stackdriver queries:

RealtimeAPI:

resource.type="k8s_container"
resource.labels.cluster_name="<INSERT CLUSTER NAME>"
labels.apiKind="RealtimeAPI"
labels.apiName="<INSERT API NAME>"

TaskAPI:

resource.type="k8s_container"
resource.labels.cluster_name="<INSERT CLUSTER NAME>"
labels.apiKind="TaskAPI"
labels.apiName="<INSERT API NAME>"
labels.jobID="<INSERT JOB ID>"

Please make sure to navigate to the project containing your cluster and adjust the time range accordingly before running queries.
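
The same filters can also be run from the command line with the gcloud CLI. Below is a minimal sketch, assuming gcloud is authenticated against the project that contains your cluster; the project ID, cluster name, and API name are placeholders, and --freshness bounds the time range:

# reads the most recent matching entries from Cloud Logging (Stackdriver)
gcloud logging read \
  'resource.type="k8s_container"
   resource.labels.cluster_name="<INSERT CLUSTER NAME>"
   labels.apiKind="RealtimeAPI"
   labels.apiName="<INSERT API NAME>"' \
  --project="<INSERT PROJECT ID>" \
  --freshness=1h \
  --limit=100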

Structured logging

You can use Cortex's logger in your Python code to emit logs in JSON, which enriches your logs with Cortex's metadata and lets you add custom metadata of your own.

See the structured logging docs for each API kind:

  • RealtimeAPI
  • BatchAPI
  • TaskAPI
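
As an illustration, here is a minimal sketch of structured logging inside a Python predictor. The import path shown follows Cortex's structured logging docs and should be treated as an assumption (follow the per-API docs above for your kind and version); the fields passed via extra are arbitrary examples of custom metadata:

# minimal sketch of structured logging in a predictor; the import path below is an
# assumption based on Cortex's structured logging docs, and the fields passed via
# `extra` are arbitrary examples of custom metadata
from cortex_internal.lib.log import logger as cortex_logger


class PythonPredictor:
    def __init__(self, config):
        self.config = config

    def predict(self, payload):
        # extra fields are merged into the JSON log entry alongside Cortex's own metadata
        cortex_logger.info("received payload", extra={"endpoint": "predict"})
        return payload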