VMware vSphere 6.5 Configuration Maximums

Download
  • Downloads: 9
  • File size: 293.55 KB
  • Number of files: 1
  • Created: 28.10.2017
  • Last updated: 26.04.2018

Compute Maximums

The ESXi host compute maximums represent the limits for host CPU, virtual machines, and Fault Tolerance.
Table 3‑1. Compute Maximums

Item Maximum
Host CPU maximums

Logical CPUs per host 576
NUMA Nodes per host 16

Virtual machine maximums

Virtual machines per host 1024
Virtual CPUs per host 4096
Virtual CPUs per core 32
The achievable number of vCPUs per core depends on the workload and specifics
of the hardware. For more information, see the latest version of Performance Best
Practices for VMware vSphere.

Fault Tolerance maximums

Virtual disks 16
Disk size 2 TB
Virtual CPUs per virtual machine 4
RAM per FT VM 64 GB

Virtual machines per host 4
Virtual CPUs per host 8
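The compute limits above interact: a planned host must stay within the per-host VM and vCPU counts and the per-core vCPU ratio at the same time. As a minimal sketch (a hypothetical helper, not part of any VMware tool), a design can be checked against Table 3-1 like this:

```python
# Compute maximums from Table 3-1 (vSphere 6.5)
COMPUTE_MAX = {
    "logical_cpus_per_host": 576,
    "numa_nodes_per_host": 16,
    "vms_per_host": 1024,
    "vcpus_per_host": 4096,
    "vcpus_per_core": 32,
}

def check_compute(logical_cpus, numa_nodes, vms, vcpus, cores):
    """Return the list of Table 3-1 limits the planned host exceeds."""
    violations = []
    if logical_cpus > COMPUTE_MAX["logical_cpus_per_host"]:
        violations.append("logical CPUs per host")
    if numa_nodes > COMPUTE_MAX["numa_nodes_per_host"]:
        violations.append("NUMA nodes per host")
    if vms > COMPUTE_MAX["vms_per_host"]:
        violations.append("virtual machines per host")
    if vcpus > COMPUTE_MAX["vcpus_per_host"]:
        violations.append("virtual CPUs per host")
    if cores and vcpus / cores > COMPUTE_MAX["vcpus_per_core"]:
        violations.append("virtual CPUs per core")
    return violations

# Example: a 56-core host with 1200 VMs exceeds only the VM-per-host limit.
print(check_compute(logical_cpus=112, numa_nodes=2, vms=1200,
                    vcpus=1600, cores=56))  # → ['virtual machines per host']
```

Note that the 32 vCPUs-per-core figure is a hard cap; as the table says, the achievable ratio in practice depends on workload and hardware.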

Memory Maximums

The ESXi host memory maximums represent the limits for host RAM and swap files.
Table 3‑2. ESXi Host Memory Maximums

Item Maximum
RAM per host 12 TB
12 TB is supported on specific OEM-certified platforms. See VMware Hardware
Compatibility Limits for guidance on the platforms that support vSphere 6.5 with
12 TB of physical memory.
Number of swap files 1 per virtual machine

Storage Maximums

The ESXi host storage maximums represent the limits for virtual disks, iSCSI physical, NAS, Fibre Channel,
FCoE, common VMFS, VMFS5, and VMFS6.
Table 3‑3. Storage Maximums
Item Maximum
Virtual Disks
Virtual disks per host 2048
iSCSI Physical
LUNs per server 512
Cavium (QLogic) 1 Gb iSCSI HBA initiator ports per server 4
Cavium (QLogic) 10 Gb iSCSI HBA initiator ports per server 4
NICs that can be associated or port bound with the software iSCSI stack per server 8
Number of total paths on a server 2048
Number of paths to a LUN (software iSCSI and hardware iSCSI) 8
Cavium (QLogic) 1 Gb iSCSI HBA targets per adapter port 64
Cavium (QLogic) 10 Gb iSCSI HBA targets per adapter port 128
Software iSCSI targets 256
The sum of static targets (manually assigned IP addresses) and dynamic targets
(IP addresses assigned to discovered targets) may not exceed this number.
NAS
NFS mounts per host 256
Fibre Channel
LUNs per host 512
LUN size 64 TB
LUN ID 0 to 16383
Number of paths to a LUN 32
Number of total paths on a server 2048
Number of HBAs of any type 8
HBA ports 16
Targets per HBA 256
FCoE
Software FCoE adapters 4
Common VMFS
Volume size 64 TB
For VMFS3 volumes with 1 MB block size, the maximum volume size is 50 TB.
Volumes per host 512
Hosts per volume 64
Powered-on virtual machines per VMFS volume 2048
Concurrent vMotion operations per VMFS volume 128
VMFS3
Raw device mapping size (virtual and physical) 2 TB minus 512 bytes
Block size 8 MB
File size (1 MB block size) 256 GB
File size (2 MB block size) 512 GB
File size (4 MB block size) 1 TB
File size (8 MB block size) 2 TB minus 512 bytes
Files per volume Approximately 30,720
VMFS5 / VMFS6
Raw device mapping size (virtual compatibility) 62 TB
Raw device mapping size (physical compatibility) 64 TB
Block size 1 MB
1 MB is the default block size. Upgraded VMFS5 volumes inherit the VMFS3 block size value.
File size 62 TB
Files per volume Approximately 130,690
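Two of the iSCSI rows above are easy to misread: the software iSCSI target limit of 256 applies to the sum of static and dynamic targets, and the per-LUN path limit combines with the 2048 total-path budget. A minimal sketch (a hypothetical checker, not a VMware tool) that applies these rules together:

```python
# Storage maximums from Table 3-3 (vSphere 6.5)
SOFTWARE_ISCSI_TARGET_MAX = 256  # static + dynamic targets combined
PATHS_PER_LUN_MAX = 8            # software and hardware iSCSI
TOTAL_PATHS_MAX = 2048           # total paths on a server

def iscsi_within_limits(static_targets, dynamic_targets, luns, paths_per_lun):
    """True if the planned iSCSI layout respects the 6.5 storage maximums."""
    total_targets = static_targets + dynamic_targets  # the limit is on the sum
    total_paths = luns * paths_per_lun
    return (total_targets <= SOFTWARE_ISCSI_TARGET_MAX
            and paths_per_lun <= PATHS_PER_LUN_MAX
            and total_paths <= TOTAL_PATHS_MAX)

# 200 static + 100 dynamic targets exceeds the combined limit of 256.
print(iscsi_within_limits(200, 100, luns=256, paths_per_lun=4))  # → False
```

The same multiply-and-compare applies to Fibre Channel: 512 LUNs at 32 paths each would be 16,384 paths, far over the 2048 total, so large LUN counts force fewer paths per LUN.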

Networking Maximums

Networking maximums represent the achievable configuration limits in networking environments
where no other, more restrictive limits apply. For example, vCenter Server limits, the limits imposed by
features such as HA or DRS, and other restrictive configurations must be considered when deploying
large-scale systems.
Note: For all NIC devices not listed in the table below, the maximum number of supported ports is 2.
Table 3‑4. Networking Maximums
Item Maximum
Physical NICs
igbn 1 Gb Ethernet ports (Intel) 16
ntg3 1 Gb Ethernet ports (Broadcom) 32
bnx2 1 Gb Ethernet ports (QLogic) 16
elxnet 10 Gb Ethernet ports (Emulex) 8
ixgbe 10 Gb Ethernet ports (Intel) 16
bnx2x 10 Gb Ethernet ports (QLogic) 8
Infiniband ports (refer to VMware Community Support) N/A
Mellanox Technologies InfiniBand HCA device drivers are available directly from
Mellanox Technologies. Go to the Mellanox Web site, http://www.mellanox.com, for
information about the support status of InfiniBand HCAs with ESXi.
Combination of 10 Gb and 1 Gb Ethernet ports Sixteen 10 Gb and four 1 Gb ports
nmlx4_en 40 Gb Ethernet ports (Mellanox) 4
nmlx5_core 25 Gb Ethernet ports (Mellanox) 4
nmlx5_core 50 Gb Ethernet ports (Mellanox) 4
nmlx5_core 100 Gb Ethernet ports (Mellanox) 4
i40en 10 Gb Ethernet ports (Intel) 8
i40en 40 Gb Ethernet ports (Intel) 4
qedentv 25 Gb Ethernet ports (QLogic) 4
qedentv 50 Gb Ethernet ports (QLogic) 4
qedentv 100 Gb Ethernet ports (QLogic) 2
VMDirectPath limits
VMDirectPath PCI/PCIe devices per host 8
A virtual machine can support 6 devices if 2 of them are Teradici devices.
SR-IOV
SR-IOV virtual functions per host 1024
SR-IOV supports up to 43 virtual functions on supported Intel NICs and up to
64 virtual functions on supported Emulex NICs. The actual number of virtual
functions available for passthrough depends on the number of interrupt vectors
required by each of them and on the hardware configuration of the host. Each
ESXi host has a limited number of interrupt vectors. When the host boots, devices
on the host such as storage controllers, physical network adapters, and USB
controllers consume a subset of the total number of vectors. Depending on the
number of vectors these devices consume, the maximum number of potentially
supported VFs can be reduced.
SR-IOV 10 Gb pNICs per host 8
VMDirectPath PCI/PCIe devices per virtual machine 4
vSphere Standard and Distributed Switch
Total virtual network switch ports per host (VDS and VSS ports) 4096
Maximum active ports per host (VDS and VSS) 1016
Virtual network switch creation ports per standard switch 4088
Port groups per standard switch 512
Static/dynamic port groups per distributed switch 10,000
Ephemeral port groups per distributed switch 1016
Ports per distributed switch 60,000
Distributed virtual network switch ports per vCenter 60,000
Static/dynamic port groups per vCenter 10,000
Ephemeral port groups per vCenter 1016
Distributed switches per vCenter 128
Distributed switches per host 16
VSS port groups per host 1000
LACP - LAGs per host 64
LACP - uplink ports per LAG (team) 32
Hosts per distributed switch 2000
NIOC resource pools per vDS 64
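The switch limits also have to be checked jointly when sizing a distributed-switch design: per-VDS limits (hosts, ports) and per-host limits (switch count, active ports) can each be the binding constraint. A minimal sketch (a hypothetical helper, not a VMware tool) against the figures in Table 3-4:

```python
# Distributed-switch maximums from Table 3-4 (vSphere 6.5)
VDS_MAX = {
    "hosts_per_vds": 2000,
    "ports_per_vds": 60000,
    "vds_per_host": 16,
    "active_ports_per_host": 1016,
}

def check_vds_design(hosts, ports, switches_per_host, active_ports_per_host):
    """Return the list of Table 3-4 limits the planned design exceeds."""
    violations = []
    if hosts > VDS_MAX["hosts_per_vds"]:
        violations.append("hosts per distributed switch")
    if ports > VDS_MAX["ports_per_vds"]:
        violations.append("ports per distributed switch")
    if switches_per_host > VDS_MAX["vds_per_host"]:
        violations.append("distributed switches per host")
    if active_ports_per_host > VDS_MAX["active_ports_per_host"]:
        violations.append("active ports per host")
    return violations

# 64 hosts on one VDS is fine, but 1200 active ports on a host is not.
print(check_vds_design(hosts=64, ports=48000, switches_per_host=2,
                       active_ports_per_host=1200))  # → ['active ports per host']
```

In practice the 1016 active-port ceiling per host is usually hit long before the 60,000-port VDS ceiling, so dense VM-per-host designs should start from the per-host figures.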