HP StorageWorks Enterprise Virtual Array Cluster FIO Starter Kit User Guide, Page 121
Setup volumes might be spread across different arrays for additional redundancy, but remember that
all writes are mirrored, and therefore the slowest performing volume will determine when the write is
acknowledged. HP does not recommend placing all three volumes on a single array; if that is all that
is available, create only two setup volumes and use different pools if the array has that option.
Building basic storage pools
Storage pools can be optimized for performance or for capacity; however, the configuration that enables the maximum 2 PB per domain and the configuration that delivers maximum performance are different, and a single pool cannot provide both at the same time. This section defines the best practices for building
capacity-optimized storage pools.
Experience with configurations in the field has indicated that pools should be built using at least 16
back-end LUs. Adding more back-end LUs to a pool is allowed, and in some cases may be desirable,
as long as other scalability limits are not exceeded. Fewer than 16 LUs will also work, but is
discouraged because it limits the system's ability to distribute the I/O across the many paths
to the storage.
A simple concatenated pool should have all of its volumes presented from a single back-end array.
The volumes should have the same RAID type, and similar performance and capacity characteristics.
The pool is constructed of at least as many volumes as there are paths from the DPMs to the array
(16 for an 8-host port array). The benefits and trade-offs of this approach are as follows:
With a concatenated pool, the pool can be expanded by adding one or more additional volumes
of arbitrary size without changing the basic performance characteristics of the virtual disks carved
out of that pool. Best practice would be to add volumes of the same size, or roughly the same
size, as the original volumes.
By having all the back-end volumes on a single array, the availability of the virtual disks carved
from the pool is dependent only on the availability of that single array.
By having all the back-end volumes on a single array, it is relatively straightforward to map
performance information from the array to the pool.
By having all the back-end volumes on a single array, it is simpler to debug issues.
By having the same RAID type and disk drives for all the volumes of the pool, the performance
characteristics of the pool are derived from the performance characteristics of the disk drives and
RAID type. If these are mixed, it is not possible to predict what RAID type and disk drive will be
used by any front-end virtual disk, and the behavior could be very unpredictable, even between
different LBAs within a single front-end virtual disk.
Occasionally an I/O will span two back-end volumes. This is called a split I/O. Split I/Os are
handled on the DPM soft path. I/Os handled by the soft path do not enjoy the lowest latency and
highest throughput achieved by the DPM fast path. An occasional split I/O will have an imperceptible
impact. With concatenated pools, there are very few split I/Os because there are only as
many opportunities for split I/Os as there are adjacent back-end volumes.
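The claim that split I/Os are rare in concatenated pools can be illustrated with a rough calculation. This sketch is not from the HP documentation; it assumes I/O start offsets are uniformly distributed over the pool, and the volume and I/O sizes below are invented for illustration:

```python
# Rough sketch: fraction of I/Os that span a volume boundary in a
# concatenated pool. Assumes I/O start offsets are uniformly
# distributed across the pool's blocks.
def split_io_fraction(num_volumes: int,
                      volume_size_blocks: int,
                      io_size_blocks: int) -> float:
    boundaries = num_volumes - 1       # one boundary per adjacent volume pair
    window = io_size_blocks - 1        # starts this close to a boundary split
    total_blocks = num_volumes * volume_size_blocks
    return boundaries * window / total_blocks

# Illustrative numbers: 16 volumes of 2**31 blocks (1 TiB at 512 B/block),
# 64 KiB I/Os (128 blocks).
print(split_io_fraction(16, 2**31, 128))  # on the order of 5e-8
```

At these hypothetical sizes only a handful of I/Os per hundred million are split, which is consistent with the "imperceptible impact" described above.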
A single path is used by any one DPM to access each back-end volume. If that path fails, the DPM
will select one of the alternate paths that are available. If a pool were constructed of a single
back-end volume, a single path would be used from each DPM to the pool. If there are ten virtual
disks using the capacity of that pool, all I/O to those ten virtual disks will be concentrated on that
single path. If there are no other pools on that array, the resources associated with the additional
ports and controllers go unused. Performance on the single path can suffer, with long
latencies and even queue-full responses.
By having at least as many back-end volumes in the pool as there are paths from the DPMs to the
array, all of those paths can be used in parallel. The odds of using multiple paths
grow as the number of back-end volumes in the pool increases. For an 8-port EVA like the
EVA8400 that is zoned to a single quad on each of two DPMs, 16 different paths are created
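The path arithmetic behind the 16-volume recommendation can be sketched as follows. This is a hypothetical helper, not an HP tool, and it assumes the path count is simply the number of zoned array host ports multiplied by the number of DPMs:

```python
# Hypothetical sizing helper: minimum recommended back-end volumes per
# pool, taken as the number of DPM-to-array paths (ports * DPMs), so
# that every path has a chance to carry I/O in parallel.
def min_pool_volumes(array_host_ports: int, num_dpms: int) -> int:
    return array_host_ports * num_dpms

# EVA8400 example from the text: 8 host ports, zoned to two DPMs.
print(min_pool_volumes(8, 2))  # -> 16
```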
Enterprise Virtual Array Cluster Administrator Guide 121