Type: New Feature
Affects Version/s: None
Fix Version/s: None
When we consider monitoring the performance of a cloud, we can broadly classify it into two categories:
1. Infrastructure/Hardware Monitoring - This covers the performance of the various infrastructure components in the cloud, such as virtual machines, storage, and network.
• CPU usage; total (all CPUs), per CPU, and the delta between them
• Disk usage; total, free, used
• Disk Latency
• Percentage Busy
• Percentage Ready
• Memory; percentage used, swap activity
• Network; bytes in/out
2. Application monitoring - When measuring application performance we cannot rely solely on the resources consumed by the application: in a cloud, applications move around, so the monitoring solution needs to track and map them.
E.g. Application Response Time - a key metric in Application Performance Management that measures the time taken for the application to respond to user requests.
So, just as we can detect deviations in infrastructure performance, we would like to do the same for application-level indicators: KPIs, response times, request statuses, or order throughput, so that we can be proactive about the business impact.
With this application monitoring we can:
- Understand the real-time performance of the cloud services from the end user’s perspective.
- Gain visibility into your workload, even when you do not control the backing infrastructure.
- Isolate problems and drill down to the root cause to immediately take action.
- Define thresholds and create alerts.
We believe that TOSCA should recommend a monitoring service specification, to be optionally implemented by TOSCA containers, that provides a set of monitoring capabilities for application workloads.
This is a crucial and basic capability of any application lifecycle management orchestrator.
The idea is simply to allow the app developer to express in the service template which app KPIs should be collected, and to trigger dynamic reactions when certain KPI thresholds are crossed.
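To illustrate the intent, the following is a minimal sketch of how a KPI declaration and a threshold-crossing reaction might look in a service template. All type and property names here (tosca.monitoring.MetricSample, tosca.policies.Threshold, high_latency_alert, etc.) are illustrative assumptions, not part of the TOSCA specification or this proposal's normative text.

```yaml
# Hypothetical sketch: declare a KPI and react when its threshold is crossed.
node_templates:
  web_app:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host: server1

  response_time_kpi:
    type: tosca.monitoring.MetricSample    # assumed sample-metric type
    properties:
      polling_schedule: 0 0/1 * 1/1 * ? *  # sample every minute
    requirements:
      - endpoint: web_app                  # collect through the app's endpoint

policies:
  - high_latency_alert:
      type: tosca.policies.Threshold       # hypothetical policy type
      properties:
        metric: response_time_kpi
        condition: { greater_than: 500 }   # millis
        action: notify                     # dynamic reaction when crossed
```

The point of the sketch is the declarative shape: the developer names the KPI, how often to sample it, and what should happen on a threshold crossing, while the container's monitoring engine does the actual collection.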
The monitoring engine collects the sample metrics through the endpoint interface exposed by the software component.
In the example below, a simple db (software component) is hosted on a compute node. A sample metric is collected on this software component every minute, and in addition an hourly aggregation is computed over those per-minute samples.
- Metric base type
    description: The basic metric type all other TOSCA metric types derive from
    valid_values: [SUM, AVG, MIN, MAX, COUNT]
- A single metric sample
    description: A single metric sample (application KPI), like CPU, MEMORY, etc.
    valid_values: [RUNNING, CREATING, STARTING, TERMINATING, ..]
    # a sample metric requires an endpoint
    endpoint: tosca.capabilities.Endpoint
- An aggregated metric
    description: An aggregated metric
    # the time window in millis for aggregating the metric
    greater_than: 0
    basedonmetric: tosca.monitoring.Metric
- A relationship between sample and endpoint
    valid_targets: [ tosca.capabilities.Endpoint ]
- A relationship to enforce that an aggregated metric is based on another sample/aggregated metric
    valid_targets: [ alu.capabilities.Monitorable.MetricSample, alu.capabilities.Monitorable.AggregatedMetric ]
- host: server1
    # the single sample connects to the monitoring endpoint
    polling_schedule: 0 0/1 * 1/1 * ? *
- Defines the aggregation that is done over the instances of the tier
    # sampling (collecting the metric) is done through the endpoint
    endpoint: # based on proposal TOSCA-188
    # aggregation over the sample, polled hourly
    polling_schedule: 0 0 0/1 1/1 * ? *
- Defines the aggregation that is done for the metric over time
    basedonmetric: # based on proposal TOSCA-188
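Read end to end, the fragments above can be sketched as one complete template. The type names (alu.capabilities.Monitorable.*, tosca.monitoring.Metric) and cron schedules follow the fragments; the template names (db, server1, db_sample, db_hourly_avg) and the aggregation property name are illustrative assumptions only.

```yaml
# Sketch of the full example: a db hosted on a compute node, a metric sampled
# every minute, and an hourly aggregation over those per-minute samples.
node_templates:
  server1:
    type: tosca.nodes.Compute

  db:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host: server1

  db_sample:
    type: alu.capabilities.Monitorable.MetricSample
    properties:
      polling_schedule: 0 0/1 * 1/1 * ? *   # Quartz-style cron: every minute
    requirements:
      - endpoint: db                        # based on proposal TOSCA-188

  db_hourly_avg:
    type: alu.capabilities.Monitorable.AggregatedMetric
    properties:
      aggregation: AVG                      # one of SUM, AVG, MIN, MAX, COUNT
      polling_schedule: 0 0 0/1 1/1 * ? *   # Quartz-style cron: every hour
    requirements:
      - basedonmetric: db_sample            # based on proposal TOSCA-188
```

Note how the basedonmetric requirement chains the aggregate to the sample, so the valid_targets constraint above can enforce that an aggregated metric is always derived from another sample or aggregated metric.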