
Service metrics

Every Knative Service has a queue-proxy container that proxies connections to the application container. The queue proxy reports a number of metrics about its performance.

Using the following metrics, you can measure whether requests are queued at the proxy side (indicating a need for backpressure) and the actual delay in serving requests at the application side.
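For example, comparing the proxy-side and application-side latency histograms gives an estimate of how long requests wait in the queue. Below is a minimal sketch, not official Knative tooling: it assumes the queue proxy's metrics are exposed in Prometheus text format at a port-forwarded URL and that the histograms are exported with standard `_sum`/`_count` series; the URL and exact exported series names are assumptions to adjust for your deployment.

```python
# Minimal sketch (not official Knative tooling). It fetches Prometheus-format
# metrics from the queue proxy and compares proxy-side vs application-side mean
# latency. The URL and the _sum/_count series names are assumptions; adjust
# them to match your deployment and metrics exporter.
import re
import urllib.request

METRICS_URL = "http://localhost:9090/metrics"  # hypothetical port-forwarded queue-proxy endpoint


def histogram_mean(text: str, name: str) -> float:
    """Mean value of a Prometheus histogram, computed from its _sum and _count series."""
    def total(suffix: str) -> float:
        pattern = re.compile(rf"^{name}_{suffix}(\{{[^}}]*\}})?\s+([0-9.eE+-]+)$", re.M)
        return sum(float(m.group(2)) for m in pattern.finditer(text))

    count = total("count")
    return total("sum") / count if count else 0.0


body = urllib.request.urlopen(METRICS_URL).read().decode()

proxy_ms = histogram_mean(body, "revision_request_latencies")    # measured at the queue proxy
app_ms = histogram_mean(body, "revision_app_request_latencies")  # measured at the user container

# A persistent gap between the two suggests requests are waiting in the proxy queue.
print(f"proxy: {proxy_ms:.1f} ms, app: {app_ms:.1f} ms, "
      f"estimated queueing overhead: {proxy_ms - app_ms:.1f} ms")
```

A proxy-side mean that stays well above the application-side mean indicates that requests are spending time queued at the proxy rather than being processed by the application.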

Queue proxy metrics

The following metrics are reported from the queue proxy's requests endpoint.

| Metric Name | Description | Type | Tags | Unit | Status |
|---|---|---|---|---|---|
| revision_request_count | The number of requests that are routed to queue-proxy | Counter | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Dimensionless | Stable |
| revision_request_latencies | The response time in milliseconds | Histogram | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Milliseconds | Stable |
| revision_app_request_count | The number of requests that are routed to user-container | Counter | configuration_name, container_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Dimensionless | Stable |
| revision_app_request_latencies | The response time in milliseconds | Histogram | configuration_name, namespace_name, pod_name, response_code, response_code_class, revision_name, service_name | Milliseconds | Stable |
| revision_queue_depth | The current number of items in the serving and waiting queue; not reported if concurrency is unlimited | Gauge | configuration_name, container_name, namespace_name, pod_name, response_code_class, revision_name, service_name | Dimensionless | Stable |
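To detect queuing directly, you can also watch the revision_queue_depth gauge from the table above. The sketch below reuses the same assumed Prometheus-format endpoint as the earlier example; the URL and threshold are hypothetical, and the gauge is only reported when the revision has a concurrency limit.

```python
# Minimal sketch: read the revision_queue_depth gauge from the (assumed)
# Prometheus-format queue-proxy endpoint and flag a backlog. The URL and the
# alerting threshold are hypothetical; tune them for your deployment.
import re
import urllib.request

METRICS_URL = "http://localhost:9090/metrics"  # hypothetical queue-proxy metrics endpoint
QUEUE_DEPTH_THRESHOLD = 10                     # hypothetical backlog threshold

body = urllib.request.urlopen(METRICS_URL).read().decode()
values = re.findall(r"^revision_queue_depth(?:\{[^}]*\})?\s+([0-9.eE+-]+)$", body, re.M)

if not values:
    # The gauge is not reported when the revision allows unlimited concurrency.
    print("revision_queue_depth not reported")
else:
    depth = max(float(v) for v in values)
    status = "requests are queueing at the proxy" if depth > QUEUE_DEPTH_THRESHOLD else "queue looks healthy"
    print(f"revision_queue_depth={depth:.0f}: {status}")
```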