
Infoblox Hardware

RMA

Datasheets

Documentation

Shipping Box Size

  • NIOS appliances: 69 x 58 x 25.4 cm
  • B1-212: 35 x 24 x 9 cm

Network

NIOS Network Interfaces

(Also true for Infoblox NIOS VM)

  • Interface1 - MGMT
  • Interface2 - LAN1 (default interface for everything)
  • Interface3 - HA (services run on this interface when HA node owns the VIP)
  • Interface4 - LAN2

NIOS Fibre Ports

TE-906 and TE-805 hardware support copper only (10/100/1000M).

TE-1506, TE-1606, TE-2306, TE-4306, TE-1405, TE-2205 and TE-4005 hardware have variants with fibre ports.

You cannot change the port format after purchase: you buy a device with either copper or SFP ports, and it keeps those ports for the rest of its life. In the X5 generation you also have to choose between SFP and SFP+ at purchase (there are separate models for 1G fibre and 10G fibre), whereas the X6 fibre option allows both SFP and SFP+. So if you get an SFP+ box that is part of the X6 series, you can insert SFP optics and copper adapter optics (this is not true of the X5 series). If you get an SFP box, you can use copper adapters. You can also mix and match; e.g. on an SFP+ box you can use SFP+ for LAN1 and SFP for HA.

It isn't possible to determine from the CLI/UI which SFP transceivers are installed, or whether the device uses copper or fibre. The exception is “show interface” on the EOL PT-xxxx appliances and the EOS TE-4030 appliance; those appliances include the following in the “show interface” output:

SFP Type:    Copper
SFP Model:  Finisar(FCLF-8521-3)

Having said that, you can look at the MAC addresses. The following is an incomplete list of OUI prefixes that would indicate a fibre card.

  • 00-90-65 (Finisar Corporation)
  • 00-10-90 (Finisar Corporation)
  • 00-01-2E (Finisar Corporation)
  • 00-30-AE (Finisar Corporation)
  • 00-E0-Ed
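
As a quick illustration, the OUI list above can be turned into a simple lookup. This is a sketch, not an Infoblox tool, and the prefix list is incomplete, as noted.

```python
# Illustrative sketch (not an Infoblox tool): check whether a MAC address
# starts with one of the fibre-vendor OUI prefixes listed above.
# The prefix list is incomplete, as the text notes.
FIBRE_OUIS = {"00-90-65", "00-10-90", "00-01-2E", "00-30-AE", "00-E0-ED"}

def looks_like_fibre(mac: str) -> bool:
    """Return True if the MAC's first three octets match a known fibre OUI."""
    # Normalise separators and case, then keep the first three octets.
    octets = mac.upper().replace(":", "-").split("-")[:3]
    return "-".join(octets) in FIBRE_OUIS

print(looks_like_fibre("00:90:65:AB:CD:EF"))  # True
print(looks_like_fibre("00:50:56:AA:BB:CC"))  # False
```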

Optical transceivers are available. Details are here (all the same) for TE-1506, TE-1606, TE-2306, and TE-4106.

  • IB-SFPPLUS-LR (Infoblox SFP+ Long Range 10GbE LR fiber Transceiver) - FRU, Transceiver, Long Range Fiber 10GE SFPPLUS
  • IB-SFPPLUS-SR (Infoblox SFP+ Short Range 10GbE SR fiber Transceiver) - FRU, Transceiver, Short Range Fiber 10GE SFPPLUS
  • IB-SFP-SX (RoHS 6 Compliant 2Gb/s 850nm SFP SR Transceiver) - FRU, Transceiver, Short Range Fiber Ethernet SFP for IB-SFP-CARD
  • IB-SFP-CO (RoHS 6 Compliant 1000 BASE-T Copper SFP Transceiver) - FRU, Transceiver, Copper 1000-BASE-T Ethernet SFP for IB-SFP-CARD

Note on “SX”: the Cisco 1000BASE-SX SFP is covered by the IEEE 802.3z standard and can only be operated over multimode fiber, such as OM1 and OM2. The letter “S” in “SX” stands for short range.

X6 fibre hardware supports (see documentation links above):

  • Finisar FTLF1318P3BTL (Finisar SFP 1GbE LR Fiber Single-Mode Transceiver)
  • Cisco SFP-H10GB-CU5M (Cisco SFP+ 10GbE Copper Direct Attach (10GSFP+Cu) Cable)
  • HP J9283B (HP SFP+ 10GbE Copper Direct Attach (10GSFP+Cu) Cable)

Disks

NIOS Disks SKU

The following SKUs are for hard drives. Before buying one, confirm you have the right size: provide Support with a tech support bundle and have them confirm.

  • T-DISK-SSD1TB - Trinzic X6 FRU, Infoblox SAS Hard Disk Drive (SSD), 1 TB (TE-1506)
  • T-DISK-SSD2TB - Trinzic X6 FRU, Infoblox SAS Hard Disk Drive (SSD), 2 TB (TE-1606, TE-2306)
  • T-DISK-SSD4TB - Trinzic X6 FRU, Infoblox SAS Hard Disk Drive (SSD), 4 TB (TE-4306)
  • T-DISK-HDD1800 - FRU, Trinzic 1405, 2205, and 4005 Series SAS Hard Disk Drive (Spindle), 1.8 TB
  • T-DISK-HDD1200 - FRU, Trinzic 1405 & 2205 Series SAS Hard Disk Drive (Spindle), 1.2 TB
  • T-DISK-HDD900 - FRU, Trinzic 1405 & 2205 Series SAS Hard Disk Drive (Spindle), 900 GB
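
A minimal sketch of the X6 model-to-SKU mapping above (the helper name and error handling are my own, not an Infoblox API; the X5 spindle SKUs are omitted because several capacities fit the same models). As noted above, always confirm the size with Support before ordering.

```python
# Hypothetical helper (names are my own, not an Infoblox API): look up the
# X6 SSD FRU SKU for an appliance model, from the list above.
DISK_SKU_BY_MODEL = {
    "TE-1506": "T-DISK-SSD1TB",
    "TE-1606": "T-DISK-SSD2TB",
    "TE-2306": "T-DISK-SSD2TB",
    "TE-4306": "T-DISK-SSD4TB",
}

def disk_sku(model: str) -> str:
    """Return the drive SKU for an X6 model, or raise for unknown models."""
    try:
        return DISK_SKU_BY_MODEL[model.upper()]
    except KeyError:
        raise ValueError(f"no single drive SKU known for {model!r}") from None

print(disk_sku("TE-1606"))  # T-DISK-SSD2TB
```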

NIOS Disk Drives for X5

The Trinzic TE-1415 and TE-1425 appliances, and Advanced Appliance PT-1405 each provide one (1) Infoblox hard disk storage device.

The Trinzic Reporting TR-1405 appliance and Network Insight ND-1405 appliance each provide two (2) hard disks in a RAID 1 array.

The hard drives in the SKUs intended for the 1405 appliances can be used to replace the existing drives of a 1405 appliance. The hard disk drives are located behind the drive bay doors, and the appliance cover needs to be removed for the replacement.

Power

Power Cables

  • C5 is the clover leaf. Infoblox does not sell cables with this connector.
  • C7 is the dual pin (a clover leaf with a leaf missing). Infoblox does not sell cables with this connector.
  • C13 is the standard kettle lead (female).
  • C14 is the male equivalent of the kettle lead (e.g. for plugging into a UPS). Infoblox does not sell cables with this connector.
  • BS 1363 is the UK standard plug with all pins partly insulated (also known as 'Type G').

Infoblox does not sell or supply C13-C14 cables. These cables are typically used to connect appliances to UPS strips.

Power Sockets around the world and images of plug types.

This page lists the electrical appliance connector types:

  • Plug Type B - Power cord for North America with IEC-60320 C13 and NEMA 5-15P cord ends
  • Plug Type I - Power cord for Australia with IEC-60320 C13 and AS/NZS 4417 cord ends
  • Plug Type G - Power cord for United Kingdom with IEC-60320 C13 and BS 1363 UK13 cord ends
  • Plug Type E - Power cord for Continental Europe with IEC-60320 C13 and CEE 7/7 SCHUKO cord ends
  • Plug Type F - Power cord for Continental Europe with IEC-60320 C13 and CEE 7/7 SCHUKO cord ends

B105 and B212 come with a US power cord by default. If a country-specific cord is added (e.g. UK), you end up with two power cables: the default US cord and the country-specific cord.

With X5 and X6 hardware, you only get one cord, and it is set at time of sale (e.g. US, UK, etc.).

Power Cable SKU

Cables offered by Infoblox. All have a C13 connector (kettle lead) at one end and a country-specific plug at the other.

SKU | Description | Gauge | Applies to
IB-POWER-CORD | Power Cord - Group A | 18 Gauge (10 amps) | X5 hardware & B1-212
IB-POWER-CORD-14G | Power Cord - Group B, 14 Gauge | 14 Gauge (15 amps) | X6 hardware (and X5)
IB-POWER-CORD-B1 | Cable, AC Power Cord, BloxOne | 18 Gauge (10 amps) | B1-105

NOTE: On quotes, the country type will be added to the end of the SKU. e.g.

  • IB-POWER-CORD-INDIA
  • IB-POWER-CORD-14G-UK

B1-105 has its own SKU due to supply channel sources.

NIOS PSU SKU

  • T-PSU600-AC FRU, Trinzic Series AC Power Supply Unit, 600W
  • T-PSU600-DC FRU, Trinzic Series DC Power Supply Unit, 600W

These are suitable for TE-906, TE-1405, TE-2205, TE-1506, TE-1606, TE-2306 and TE-4106 hardware.

NOTE: When ordering the TE-1506 hardware with dual PSU, the TE-1506 will ship with one PSU and the second PSU will be shipped in a separate package. The second PSU will need to be unboxed and installed into the TE-1506 chassis by the end user.

NIOS PSU Power Draw

PSU Rated limits can be found in the datasheet.

Hardware | Typical Draw | Max Draw | PSU Max | Documentation
TE-906 | 250 W | 300 W | 400 W | TE-906
TE-906-2AC | 250 W | 300 W | 600 W | TE-906
TE-1506 | 375 W | 475 W | 600 W | TE-1506
TE-1606 | 375 W | 475 W | 600 W | TE-1606
TE-2306 | 450 W | 500 W | 600 W | TE-2306
TE-4106 | 450 W | 500 W | 600 W | TE-4106
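
For rack power budgeting, the Max Draw figures above can simply be summed. This is an illustrative sketch, not an official sizing method.

```python
# Illustrative rack power budget using the "Max Draw" figures from the
# table above (watts). Not an official Infoblox sizing method.
MAX_DRAW_W = {
    "TE-906": 300, "TE-906-2AC": 300,
    "TE-1506": 475, "TE-1606": 475,
    "TE-2306": 500, "TE-4106": 500,
}

def rack_max_draw(models):
    """Sum worst-case draw in watts for a list of appliance models."""
    return sum(MAX_DRAW_W[m] for m in models)

print(rack_max_draw(["TE-1506", "TE-1506", "TE-2306"]))  # 1450
```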

Racking Hardware

NIOS Air Flow

Air flow through Trinzic hardware is “Front to back”.

As per docs

“Ensure that you install the appliance in an environment that allows open air to the front and back of the appliance. Do not obstruct the appliance or block air flow going from the front to the back of the appliance.”

NIOS Racking Rails

The factory ships all NIOS hardware with the adjustable rail kit. If the fixed kit is needed, it must be purchased separately.

  • T-TRINZIC-RAIL-400-900MM FRU, Infoblox rack rail kit for 4-post racks 600 - 900 mm deep, adjustable
  • T-TRINZIC-RAIL-200-600MM Infoblox Fixed Rack Mount Rail Kit with 2 posts

TE-906, TE-1506 and TE-1606 are all 1U.

TE-2306 and TE-4106 are 2U.

Rack Mounting Safety Requirements

The Infoblox appliance draws air in through the front of the chassis and expels air through the rear. Adequate ventilation is required to allow ambient room air to enter the system chassis and to be expelled from the rear of the chassis.

The following space and airflow are required for Infoblox TE-906 system operation:

  • Minimum clearance of 63.5 cm (25 in) in front of the rack
  • Minimum clearance of 76.2 cm (30 in) at the rear of the rack
  • Minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack or row of racks

NIOS-X

NIOS-X Performance

Recommended for | Small Branches | Medium Branches | Large Branches
DHCP Leases Per Second (LPS)* | 25 | 300 | 400
DNS Queries Per Second (QPS)* @ 0% CHR | 160 | 700 | 2.4K
DNS Queries Per Second (QPS)* @ 85% CHR | 681 | 2.9K | 6.8K
DNS Queries Per Second (QPS)* @ 100% CHR | 1.6K | 7K | 8.8K

*The stated performance numbers are for reference only. They represent the results of lab testing in a controlled environment focused on individual protocol services. Enabling additional protocols, services, cache hit ratio for recursive DNS, and customer environment variables will affect performance. To design and size a solution for a production environment, please contact your local Infoblox Systems Engineer.
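
Purely as an illustration of reading the table, the 85% CHR figures can be used to pick a branch size for a required QPS load. The helper below is a sketch with thresholds taken from the table; per the footnote, real sizing should go through an Infoblox SE.

```python
# Sketch: pick the smallest branch size whose 85% CHR QPS figure covers
# the required load. Thresholds come from the table above; per the
# footnote, real sizing must go through an Infoblox SE.
BRANCH_QPS_85 = [("Small", 681), ("Medium", 2_900), ("Large", 6_800)]

def recommend_branch(required_qps: int) -> str:
    for name, capacity in BRANCH_QPS_85:
        if required_qps <= capacity:
            return name
    return "beyond branch sizing; consult an SE"

print(recommend_branch(500))    # Small
print(recommend_branch(5_000))  # Large
```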

B105

Also known as B1-105.

Single PSU only. Power Brick takes standard kettle lead (C13).

Infoblox announced that the EOS date for the B105 is 31 Oct 2024. The B105 only comes with a 1-year warranty, which means all B105 units will be out of hardware support by 31 Oct 2025.

B212

The B212 (a.k.a. B1-212) is a Dell VEP 1425 with an Infoblox logo and a 1-year warranty from Infoblox. From January 2025 it is possible to purchase renewable maintenance contracts for B212 hardware. It includes a TPM module for possible future use, which the “standard” VEP 1425 from Dell does not.

Single PSU only. Power Brick takes standard kettle lead (C13)

If you need more than a 1-year hardware warranty, you need to buy a Dell VEP 1425 directly from Dell.

Serial console works with Dell VEP 1425.

Dell EMC Networking Versa-ready VEP1425: 4 core, 8 GB RAM, 16 GB eMMC, 120 GB SSD

Optional “Dual-Unit Tray”: the VEP1405 supports desktop placement (rubber feet), wall mount (using brackets), and rack mount using the optional dual-unit tray (see data sheet).

Dell VEP

Infoblox supports the following two Dell VEP servers for NIOS-X:

  • Dell VEP-1425
  • Dell VEP-1485

References:

  • Infoblox Deployment Guide for Dell VEP here.
  • Dell VEP webpage here
  • Dell Spec Sheet PDF here
  • Dell 1405 Platform Release Notes here
  • Dell VEP Chassis Details here

VEP1425 (210-AREH) (Has WiFi and Bluetooth. Avoid for Infoblox)

  • 4-core
  • 8 GB DDR4
  • 16 GB eMMC
  • 120 GB SSD
  • 2x2 Wi-Fi
  • 6x 1 GB Copper RJ45
  • 2x 10 GB SFP+
  • Versa software installed
  • Trusted Platform Module (TPM) 2.0 - World-wide except China
  • 2x USB 3.0
  • Low-energy Bluetooth (BLE)
  • One fan

VEP1425N (210-BCBE)

  • 4-core
  • 8 GB DDR4
  • 16 GB eMMC
  • 120 GB SSD
  • 6x 1 GB Copper RJ45
  • 2x 10 GB SFP+
  • Versa software installed
  • Trusted Platform Module (TPM) 2.0 - World-wide except China
  • 2x USB 3.0
  • One fan

VEP1485 (210-AWIQ) (Has WiFi and Bluetooth but also 64GB RAM and extra 2TB SSD)

  • 16-core
  • 64 GB DDR4
  • 16 GB eMMC
  • 2 TB SSD
  • 2x2 Wi-Fi
  • 6x 1 GB Copper RJ45
  • 2x 10 GB SFP+
  • ADVA software installed
  • Trusted Platform Module (TPM) 2.0 - World-wide except China
  • 2x USB 3.0
  • Low-energy Bluetooth (BLE)
  • Two fans

VEP1485N (210-BCBB)

  • 16-core
  • 32 GB DDR4
  • 16 GB eMMC
  • 240 GB SSD
  • 6x 1 GB Copper RJ45
  • 2x 10 GB SFP+
  • Versa software installed
  • Trusted Platform Module (TPM) 2.0 - World-wide except China
  • 2x USB 3.0
  • Two fans

NIOS Hardware Status

Example output of show hardware_status

hostname > show hardware_status 
CPU_TEMP:  29 C
SYS_TEMP: 27 C
POWER:  Power #1 OK TYPE:AC FRU-ID:PWS-606P-1R SN:P606PCL03VV4222
POWER:  Power #2 OK TYPE:AC FRU-ID:PWS-606P-1R SN:P606PCL03VV4333
FAN1:   7400
FAN2:   7600
FAN3:   7500
FAN4:   7500
FAN5:   7600
FAN6:   7500

RAID_ARRAY: OPTIMAL
RAID_DISK1: ONLINE, IB-Type14
RAID_DISK2: ONLINE, IB-Type14
RAID_DISK3: ONLINE, IB-Type14
RAID_DISK4: ONLINE, IB-Type14
RAID_BATTERY: RAID battery OK
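
A sketch of pulling fan speeds and RAID state out of that output, assuming the field layout captured above (it may differ across NIOS versions and models).

```python
# Sketch of parsing `show hardware_status` output, assuming the field
# layout captured above (it may vary across NIOS versions and models).
def parse_hardware_status(text: str) -> dict:
    status = {"fans": {}, "raid_disks": {}}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key.startswith("FAN"):
            status["fans"][key] = int(value)        # fan speed in RPM
        elif key.startswith("RAID_DISK"):
            status["raid_disks"][key] = value       # e.g. "ONLINE, IB-Type14"
        elif key == "RAID_ARRAY":
            status["raid_array"] = value            # e.g. "OPTIMAL"
    return status

sample = "FAN1:   7400\nRAID_ARRAY: OPTIMAL\nRAID_DISK1: ONLINE, IB-Type14\n"
print(parse_hardware_status(sample)["fans"]["FAN1"])  # 7400
```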

Hardware EOL

Model | GA | End of Life | End of Extended Support | Service Life
Trinzic A | Q1 2009 | Q1 2016 | Q4 2017 | 8 Years
Trinzic X0 | Q4 2011 | Q1 2021 | Q1 2023 | 11 Years
Trinzic X5 | Q3 2016 | Q2 2026 | Q2 2028 | 11.5 Years
Trinzic X6 | Q3 2023 | - | - | -

NIOS Console Cable

Documentation (in this case, for TE-1506 hardware).

Physical NIOS appliances have a male DB-9 console port on the front panel.

  • Bits per second: 9600
  • Data bits: 8
  • Parity: None
  • Stop bits: 1
  • Flow control: Xon/Xoff

HAS TO BE A NULL MODEM CABLE - “Null modem”, also called crossover, is a term associated with serial (RS-232) cables. A standard serial cable, also called an AT cable, has the wires inside the cable running straight through: take a DB9 cable as an example; pin 1 on one end of the cable is connected to pin 1 on the other end, then pin 2 to 2, 3 to 3, and so on. Null modem cables are serial cables that use an alternative pinout (crossing the transmit and receive lines) for different functionality.

https://docs.infoblox.com/space/IIGF2SA/36834963/Connecting+to+the+Appliance

Examples of cables/adapters that should work:

Other examples use a USB hub/adapter to connect to NIOS hardware.

If buying a DB-9 female to RJ45 female adapter, it can be a “normal” one. The important thing is that the DB-9 female-to-female cable is the correct (null modem) type (see above).

If creating a DB-9 to RJ45 adapter, the pinout is as follows:

DB9 RJ-45
7 8
4 7
3 6
5 5
5 4
2 3
6 2
8 1
  • Kit Part: 200-0008-100-A
  • Adapter: 216-0015-100A
  • Line Cord: 201-0006-000-A
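
The pinout table above can be expressed as a lookup (illustrative only; note that DB-9 pin 5, signal ground, maps to both RJ-45 pins 4 and 5).

```python
# The DB-9 to RJ-45 pinout from the table above as a lookup table
# (illustrative only, not an Infoblox tool).
DB9_TO_RJ45 = {7: 8, 4: 7, 3: 6, 2: 3, 6: 2, 8: 1}
# DB-9 pin 5 (signal ground) fans out to two RJ-45 pins.
DB9_GROUND_FANOUT = {5: (5, 4)}

print(DB9_TO_RJ45[3])  # 6
```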

Hardware vs Virtual

One question many users ask is “Should I have hardware appliances or virtual appliances?”.

In the majority of cases, Infoblox does not have an interest in whether customers deploy physical or virtual appliances. It is up to the individual customer, based on their specific circumstances and requirements.

The following list of pros/cons of running hardware is intended simply to provide some consideration points when thinking about your own environment.

Worldwide there are major enterprises that deploy fully in cloud and others fully on-prem; some fully physical and some fully virtual. Often it is actually a combination of physical and virtual.

Advantages of Hardware

  • Critical DNS/DHCP service won't be affected by Hypervisor performance/connectivity issues and doesn't take up hypervisor compute. (NIOS appliances require a lot of CPU/RAM)
  • Hardware appliances won't be taken offline if a hypervisor gets hit by ransomware (see what hackers did to the MGM casino for the impact of hypervisor ransomware in general). There has been at least one case of a customer with a mixture of physical and virtual Infoblox appliances getting hit by ransomware on their hypervisor: their virtual appliances (including virtual NIOS appliances) went offline but the physical Infoblox appliances kept running. Another customer moved to Infoblox after their Microsoft DNS/DHCP went offline during a ransomware attack; the lack of DHCP meant the responding IT staff couldn't actually get on the network for a long time.
  • Hardware means the DNS/DHCP service is not impacted by a storage or hypervisor failure.
  • The Infoblox team at the customer (Networks/Infrastructure/Platform/Unix/etc.) doesn't require access to the hypervisor admin console in order to access the console of the appliances. In some companies the Infoblox team is given all appropriate access to the hypervisor infrastructure; in other companies the Infoblox team has no access to it.
  • If NIOS is being deployed in an environment where the availability/support status of the hypervisor is in question, hardware can be preferable. E.g. in a large lab environment where the hypervisor system has questionable uptime, running NIOS on hardware frees you from service interruption when the VMs are impacted.
  • The Infoblox team at the customer has direct access to the appliances independently of other IT teams (hypervisor, storage, etc.). This is often useful where Infoblox is run on behalf of the end user by a third-party partner organization in the partner's own DC.
  • The Infoblox team at the customer can RMA a faulty appliance rather than engage the internal hypervisor team to redeploy the VM. This might sound strange, but in some very large enterprise environments where multiple third parties are responsible for systems like networks, security, storage, etc., something as simple as a VM rebuild can be tedious.
  • Avoids a chicken-and-egg situation where a crashed storage system needs DNS in order to boot, but the DNS server is a VM that is offline because the storage system crashed (true story - this caused one Infoblox customer to revert their migration to a fully virtual Infoblox deployment all the way back to a physical deployment). In another case, when the storage systems came back online, VMs couldn't start because vCenter couldn't resolve the hosts. The main question is this: “If I power off ALL my equipment in the DC, can I power it all back on, or will the fact that the DNS server (and possibly DHCP server) is virtualized prevent the hypervisor system (looking at you, VMware) from booting?” ESXi and vCenter (vCloud Director) need to do forward and reverse lookups to boot. If you are going to virtualize everything, make sure you can boot everything from a full global “power off”.
  • Allows services to be run on segments of the network where security will not permit VMs. For example, some organizations will not allow guest DNS/DHCP to run on VMs inside the corporate VM environment.
  • Allows services to be run where there is no hypervisor capacity. For example, a large but remote office with no VM capacity: you can deploy hardware appliances, or deploy a hypervisor stack and then virtual Infoblox appliances. The second option gives you VMs but also extra infrastructure to manage and more software to patch and keep secure.
  • The end user's hypervisor of choice may not be supported by Infoblox.
  • The end user may have an architectural requirement that network services provided by Infoblox (DNS/DHCP/NTP/etc.) do not touch the virtual network. (A common one is for guest or DMZ services to be on hardware while production internal Infoblox appliances are virtualized on the production hypervisor.)
  • Some deployment types are better with hardware. e.g. External DNS server plugged directly into the Internet outside of the firewall with ADP (Advanced DNS Protection) enabled. This allows attacks against DNS to not flood out the firewall. However, you are unlikely to want to allow external, public network traffic through to your production hypervisor.
  • Allows services to be run when the existing hypervisor (or planned future hypervisor) is not supported by Infoblox for vNIOS deployment. It also means the hypervisor team can upgrade the hypervisor without waiting for Infoblox to announce official support for the new version of hypervisor code (e.g. latest versions of VMware ESXi, Hyper-V, KVM, etc.).

Challenges with Hardware

As a side note: a common time for a PSU to fail is at reboot of the appliance, due to the temporary surge of power at boot.

  • Hardware is an additional capital expenditure. In addition to the capital cost of the hardware itself, there is also the operational cost of the hardware support contract (separate from the software subscription).
  • If a virtual appliance fails, it can be much easier to get console access via the hypervisor than it is with hardware (depending on whether console access has been set up on the hardware; this also assumes the Infoblox team has been given hypervisor access). Actions like forcing a reboot can likewise be much easier. With hardware at remote sites, getting console access can be difficult, especially in high-security data centers where the Infoblox team themselves may not be permitted access and may have to arrange for authorized “local hands” to reach the unit and be talked through getting console access.
  • If an appliance failure occurs, a VM can just be redeployed (assuming a well-functioning hypervisor team). Hardware needs to be RMA'd. This can be really easy but can also be time-consuming and frustrating depending on the environment. The larger the enterprise and the more global the network, the more issues you can run into. RMA means:
    1. Support will require end user to verify need for RMA using console connection. Depending on the environment (think highly secure remote DC), this can present a challenge to the customer. (This is normal across most vendors).
    2. Once Support has authorized the RMA, the depot must process it, and this is where “Next Business Day” can frustrate the customer.
    3. NBD means next business day after the depot has shipped the hardware. However, in order to ship hardware, the depot must receive the RMA order before 15:00 local time, and the depot doesn't work public holidays, etc.
    4. Before the RMA order goes to the depot, Support must run some verification that the appliance is dead, which requires some local diagnostics. The problem is, if the customer has an environment that makes it difficult to get this data, the RMA order may be delayed.
    5. For example: the customer does a reboot on Friday night and the box doesn't come back; the customer can't get someone on site until Tuesday because of a bank holiday Monday. Once local hands help Support show an RMA is needed, Support issues the RMA, but after 15:00 local time. The RMA gets picked up on Wednesday for a Thursday delivery. Thus the box failed on Friday and the RMA arrived the following Thursday (while still falling inside the “NBD” T&Cs).
    6. In very rare cases, an RMA may be delayed because of an administrative issue (e.g. shipping paperwork, trade regulation changes, etc)
    7. Finally, the customer may not provide Infoblox with the full shipping details to quote when the hardware arrives at the data center. This can result in the RMA being rejected and the delivery having to be rescheduled.
  • TE-906 doesn't have RAID. TE-1506 doesn't have RAID. TE-1606 has RAID-1. TE-2306 and TE-4106 have RAID-10. All models support dual PSU. TE-906 must have dual PSU specified at purchase and the TE-1506 must have the second PSU added during or after purchase.
  • B1-212 hardware has single SSD and single PSU.
  • Capital expense of hardware and ongoing support cost (operational cost) of hardware, in addition to the cost of the NIOS software. Furthermore, with a virtual NIOS appliance (e.g. TE-926), if you want to upgrade, you can just pay the difference between that and the higher model you want (e.g. TE-1516). However, if you have hardware, not only do you need to pay the delta, you must also buy new hardware (e.g. TE-1506 hardware for TE-1516 software).
  • Hardware issues are now the problem of the Infoblox team rather than the hypervisor/storage team.
  • Cost of power to run and cool the hardware, as well as the space the devices use up in the rack. This becomes more of an issue at smaller sites (e.g. offices rather than DCs).
  • Moving hardware is much more involved than a vMotion or redeploy of a VM. VM deployment can be scripted.
  • Initial deployment (and eventual hardware refresh) can involve a lot of shipping, and also raises the challenge of how you prep the boxes. E.g. if you have 50 sites to deploy to (true story), how do you get all the devices prepped with the correct network configuration, software, etc., so that the local hands team can just rack, cable and power on without doing any other config work? Not such an issue in smaller environments.
  • Occasionally NIOS (or any system in general) may hang on reboot. If this happens, a physical unplug/replug of the power cables is needed. This can be challenging if the hardware is in a remote data center and local hands are not immediately available (also a reason to use HA with hardware). With a VM you don't have this problem (assuming you have access to the hypervisor console).
  • Some end users have a policy of not allowing any VM to be deployed if it can't have a baseline of software installed. Infoblox appliances can't have extra software installed, so those customers have to deploy on hardware.

Other Thoughts

For HA pairs, do NOT mix physical and virtual. The only reason it is supported is to allow a physical HA pair to be migrated to a virtual HA pair (or vice-versa).

Don't forget the hybrid approach. A frequent concern is service availability if the hypervisor/storage is impacted. Quite often customers will consider virtualizing the non-service members of a Grid, such as reporting and network discovery. The GM/GMC could also be virtualized, as they shouldn't/don't serve DNS/DHCP. You could also consider physical appliances for one DC and virtual for another, etc.

If you go for hardware for the GM/GMC, it is a good idea for the GM to be HA. Why? If you try to promote the GMC and it doesn't come back, you are now without a backup if the GM goes down, until the RMA completes. If you try to upgrade the GM and something goes wrong (e.g. hardware failure), an HA GM means you still have a UI to log into and troubleshoot. If the GM isn't HA, you would have to promote the GMC, and that will reboot all members of the Grid (with an impact to service).

infoblox/hardware.1758045249.txt.gz · Last modified: by bstafford