Performance management is essential for optimizing enterprise infrastructures. Consider the following:
- Four KPIs to Measure Storage Performance
- Why are Storage Performance Management KPIs Important?
- How Visual Storage Intelligence® Helps Manage Storage Performance
Four KPIs to Measure Storage Performance
Latency, throughput, input/output operations per second (IOPS), and financial metrics are the most important measures of storage performance.
The terms latency, IOPS, and throughput are interconnected. In particular, a storage system with low latency should be able to deliver high IOPS. For example, a latency of 5 milliseconds corresponds to about 200 IOPS (1/0.005) per outstanding request.
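The reciprocal relationship above can be sketched as a small Python helper. This is illustrative only; real systems overlap outstanding I/Os (queue depth), so effective IOPS is usually higher than the serial figure.

```python
def iops_from_latency(latency_ms, queue_depth=1):
    """Approximate IOPS achievable at a given per-operation latency.

    With one outstanding I/O at a time (queue depth 1), IOPS is
    simply the reciprocal of latency in seconds; deeper queues
    overlap operations and multiply the effective rate.
    """
    return queue_depth / (latency_ms / 1000.0)

# 5 ms latency, serial I/O -> 1 / 0.005 = 200 IOPS
serial = iops_from_latency(5)        # 200.0
# A 0.5 ms flash device -> 2000 IOPS per outstanding request
flash = iops_from_latency(0.5)       # 2000.0
```

The `queue_depth` parameter is a simplification added here to hint at why measured IOPS often exceed the naive reciprocal.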
With that said, let’s look at each KPI in detail:
Latency
Also known as response time, latency is a measure of how quickly a storage system responds to read and write requests.
Latency is an important metric for storage performance management. It quantifies the time it takes to complete a single input/output (I/O) operation. Latency is commonly measured in milliseconds, and the fastest flash drives now quote values of a fraction of a millisecond.
In IT infrastructure, storage is frequently a bottleneck. As a result, any storage system should make every effort to minimize latency. Lower latency means less time waiting for I/O to finish, which means faster execution. When conducting read/write operations on a permanent storage medium, the ideal latency would be zero, so that the application is never penalized by waiting on storage.
Every I/O activity, however, has some latency owing to data traveling across networks.
Reduced latency means more efficient use of processor and memory. Latency has a direct impact on the speed at which virtual machines and desktops work. This explains the shift of I/O management to the server and the adoption of solid-state storage in virtual environments.
In fact, the goal of flash-caching hardware and software is to eliminate the need for data to traverse the network, thereby producing exceptionally low latency values.
Latency is a critical indicator in monitoring the state of physical storage resources because of the unpredictable nature of virtual environments. Whether a system has one or one hundred virtual machines, latency remains important.
Throughput
Also known as bandwidth, throughput refers to the capability of a storage system to transfer a fixed amount of data in a measured period of time.
Typically, throughput is measured in megabytes per second or similar units.
Throughput metrics for storage arrays and disk devices can be monitored in two ways: sustained throughput and peak throughput.
Sustained throughput refers to a device’s or system’s ability to operate at a steady rate over a lengthy period of time. The peak throughput of a system refers to the maximum capacity it can deliver in a short period of time.
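The distinction between the two can be sketched with a short Python helper over hypothetical monitoring samples. Real monitoring tools typically use rolling windows and percentiles; this is a minimal approximation.

```python
from statistics import mean

def summarize_throughput(samples_mbps):
    """Summarize per-interval throughput samples (MB/s).

    Sustained throughput is approximated here by the mean over the
    whole window; peak throughput is the single highest sample.
    """
    return {"sustained": round(mean(samples_mbps), 1),
            "peak": max(samples_mbps)}

# Hypothetical one-minute window, sampled every 10 seconds,
# with one interval spiking during a burst of I/O
samples = [180, 175, 190, 960, 185, 178]
stats = summarize_throughput(samples)    # peak 960, sustained ~311.3
```

Note how a single burst dominates the peak figure while barely moving the sustained average; that is exactly the gap a boot storm exposes.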
Peak throughput levels are critical in virtual desktop infrastructure environments, where boot storms (i.e., when a large number of users log in and start their virtual desktops at the same time) can cause a spike in I/O demand. If the system cannot handle that spike effectively, the result is degraded performance and increased latency.
When managing the dynamic migration of virtual machines (VMs) between datastores in virtual server settings, good throughput numbers are also critical.
Scaling a virtual environment to handle a large number of virtual machines requires a corresponding increase in throughput capability. Controlling peak demand in virtual desktop infrastructure systems, with their peak read and write load periods, can be a challenge.
IOPS
Input/output operations per second (IOPS) is a measure of the number of individual read/write requests a storage system can service per second.
This metric is closely related to throughput, but not identical. A system that can deliver a high number of IOPS with large data chunks will be able to deliver high throughput, as the value is simply the number of IOPS multiplied by the I/O size.
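The IOPS × I/O size relationship can be made concrete with a quick calculation (the numbers below are illustrative, not benchmarks of any particular device):

```python
def throughput_mbps(iops, io_size_kb):
    """Throughput (MB/s) = IOPS x I/O size, converted from KB to MB."""
    return iops * io_size_kb / 1024.0

# The same array can report very different throughput depending
# on block size: many small I/Os vs. fewer large ones.
small_blocks = throughput_mbps(10_000, 4)     # ~39.1 MB/s at 4 KB
large_blocks = throughput_mbps(1_000, 1024)   # 1000.0 MB/s at 1 MB
```

This is why an IOPS figure is meaningless without the block size it was measured at, and why vendors quote both.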
From the perspective of the host, IOPS is typically used as the standard measure. This provides an abstracted view that is not dependent on the underlying hardware capabilities. IOPS is used as a measure in both private and cloud-based virtual infrastructures.
$/GB and $/IOPS
$/GB measures cost per unit of capacity. For years, this was the standard unit to gauge the cost of storage performance. However, new devices and complicating factors such as the way that flash storage works (for which $/GB can be dramatically higher) have opened the door for other helpful measurements.
One of those measurements is $/IOPS, which measures cost of performance instead of cost of capacity.
Both metrics are important for strategizing future storage performance plans, especially when purchasing products and placing data on tiers of storage.
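The trade-off between the two financial metrics can be sketched with hypothetical prices and specifications (all figures below are made up for illustration):

```python
def cost_metrics(price_usd, capacity_gb, rated_iops):
    """Return cost per unit of capacity and cost per unit of performance."""
    return {"usd_per_gb": price_usd / capacity_gb,
            "usd_per_iops": price_usd / rated_iops}

# Hypothetical flash tier vs. spinning-disk tier
flash = cost_metrics(price_usd=50_000, capacity_gb=20_000, rated_iops=500_000)
disk = cost_metrics(price_usd=30_000, capacity_gb=100_000, rated_iops=10_000)
# Flash costs more per GB (2.50 vs 0.30) but far less per IOPS (0.10 vs 3.00),
# which is why hot, performance-sensitive data belongs on the faster tier.
```

Evaluating both ratios side by side is what makes tiered data placement decisions defensible rather than intuitive.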
Why are Storage Performance Management KPIs Important?
Identifying and recording metrics offers the information needed to evaluate storage performance, but any values obtained must be interpreted in light of I/O profiles and the location of the measurements.
After all, every application generates distinct workload demands.
Because active data is dispersed throughout a datastore or volume holding virtual hard disks, virtual desktop infrastructure and virtual server traffic are highly unpredictable. Virtual desktop infrastructure data is often read-heavy, so low read I/O latency provides a considerable performance benefit.
It is also crucial to select where metrics are recorded from in order to have a complete picture of I/O performance. There is no right or wrong place to collect measurements; each provides insight into the system’s operation.
For example, values collected from the host illustrate how contention at the datastore affects individual guest performance, while values taken from the hypervisor show the storage network’s effectiveness.
How Visual Storage Intelligence® Helps Manage Storage Performance
Visual Storage Intelligence® can help you optimize your performance management and capacity planning.
In fact, you can use Visual Storage Intelligence® to:
- Predict future device capacities & workload performances with and without changes in your environment
- Do capacity planning by group, device, or pool
- Forecast future capacities affected by changing data reduction ratios
- Do capacity planning for VMs and clusters
- Find the sources of file share growth
- And more…
Join us for a live demo – we’ll show you everything Visual Storage Intelligence can do, including how it can help make storage chargebacks / showbacks a reality in your organization.
Seeing is Believing
See Visual Storage Intelligence® in Action