HP Cluster Platform Cabling Tables White Paper
Conclusion
Base your decision between Ethernet and InfiniBand on your performance and cost requirements.
We are committed to supporting both InfiniBand and Ethernet infrastructures, and to helping you
choose the most cost-effective fabric solution for your environment.
InfiniBand is the best choice for HPC clusters requiring scalability from hundreds to thousands of
nodes. While you can apply zero-copy (RDMA) protocols to TCP/IP networks such as Ethernet, RDMA
is a core capability of the InfiniBand architecture, and flow control and congestion avoidance are
native to InfiniBand.
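To make the zero-copy idea concrete, the sketch below (an illustration only; the white paper itself contains no code) uses the libibverbs API to register an application buffer for remote access, assuming an RDMA-capable adapter and the libibverbs development package are installed. Registration pins the memory and returns the local and remote keys that let the adapter move data directly between application buffers, without intermediate kernel copies. Queue-pair setup and key exchange with the peer are omitted.

/* Minimal sketch (assumptions: libibverbs installed, an RDMA-capable
 * adapter present). Shows the memory-registration step that enables
 * zero-copy transfers: the buffer is pinned and given keys that a
 * remote peer can use for RDMA reads/writes without involving the
 * local CPU. Build: gcc rdma_reg.c -libverbs
 */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Buffer the application wants to expose for zero-copy transfers. */
    size_t len = 1 << 20;
    void *buf = malloc(len);

    /* Registration pins the pages and returns local/remote keys. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE |
                                   IBV_ACCESS_REMOTE_READ);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}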
InfiniBand also includes support for fat-tree and other mesh topologies that provide multiple
parallel paths between nodes, allowing simultaneous connections across multiple links. This lets the
InfiniBand fabric scale as you connect more nodes and links.
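As a rough illustration of how a fat tree scales (a hypothetical calculation, not from the white paper; it assumes a two-level, non-blocking topology built from identical radix-k switches), the snippet below computes how many nodes common switch radices can support when each leaf switch splits its ports evenly between nodes and uplinks.

/* Back-of-the-envelope sketch (assumption: a two-level, non-blocking
 * fat tree built from identical radix-k switches). Each leaf switch
 * dedicates half its ports to nodes and half to uplinks, so the
 * fabric scales to k*k/2 nodes while preserving full bisection
 * bandwidth across the parallel links.
 */
#include <stdio.h>

int main(void)
{
    int radix[] = { 24, 36 };   /* common InfiniBand switch radices */
    for (int i = 0; i < 2; i++) {
        int k = radix[i];
        int leaf_switches  = k;         /* one leaf per spine port     */
        int spine_switches = k / 2;     /* one spine per leaf uplink   */
        int nodes = k * (k / 2);        /* k leaves x k/2 node ports   */
        printf("radix %2d: %2d leaves, %2d spines, up to %4d nodes\n",
               k, leaf_switches, spine_switches, nodes);
    }
    return 0;
}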
Parallel computing applications that involve a high degree of message passing between nodes benefit
significantly from InfiniBand. Data centers worldwide have deployed double data rate (DDR)
InfiniBand for years and are quickly adopting quad data rate (QDR). HP BladeSystem c-Class clusters
and similar rack-mounted clusters support DDR and QDR InfiniBand host channel adapters (HCAs) and
switches.
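As an illustration of the kind of latency-sensitive message passing described above (a hypothetical example, not part of the white paper), the following MPI ping-pong measures the average one-way latency between two ranks. It assumes an MPI implementation such as Open MPI or MVAPICH2 configured to run over the cluster's InfiniBand fabric.

/* Minimal sketch (assumption: an MPI implementation available via
 * mpicc/mpirun, using the InfiniBand fabric as its transport).
 * Two ranks bounce a small message back and forth; the measured
 * round-trip time is the kind of metric where InfiniBand's low
 * latency shows up directly.
 * Build: mpicc pingpong.c -o pingpong   Run: mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char msg[8] = "ping";
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        double usec = (MPI_Wtime() - t0) * 1e6 / (2.0 * iters);
        printf("average one-way latency: %.2f us\n", usec);
    }

    MPI_Finalize();
    return 0;
}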