Table of Contents

Deploy Virtual Infoblox Appliance

Where possible, deploy all vCPUs on the same socket. If you require more vCPUs than a single physical socket provides, NUMA effects can make performance sub-optimal, particularly if you are using ADP or DCA. If you are not using ADP or DCA, keeping the VM within a single NUMA node matters less.
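If you do need to pin the VM to a single NUMA node, one option (a sketch, assuming ESXi; the node number "0" is an example) is an advanced configuration parameter in the VM's .vmx file:

  numa.nodeAffinity = "0"    # keep the VM's vCPUs and memory on NUMA node 0

Note that node affinity restricts where the scheduler can place the VM, so only set it when the NUMA penalty is actually a problem.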

The resizable VM model can scale up to a 2.5TB disk.

Documentation


VMware

The following steps show how to deploy an Infoblox VMware OVA image onto VMware Workstation on Windows 10. Deploying on ESXi should be similar.

  1. Deploy nios-9.0.7-ddi.ova by double clicking it in Windows Explorer.
  2. Accept the terms of the license agreement and click 'Next'.
  3. Pick a name for the VM and click 'Next'.
  4. Select the model of Infoblox Appliance you want to deploy (e.g. TE-815, TE-825). This is how VMware knows how much RAM and CPU to assign to the VM.
  5. You now have an option to pre-populate configuration but ignore this and just click 'Import'.
  6. The machine will deploy and boot. Make sure you update the VM's four network interfaces to be connected to the appropriate VLANs in the hypervisor. The interface order is MGMT, LAN1, HA, LAN2; remember that LAN1 (the interface you access by default) is the second network interface, while the first is the management interface, which is not used by default. Also remember that, IN A LAB, you can get away with reducing the RAM from 16GB to 4GB (or even 2GB) and from 4 CPUs to 2 (or even 1). HOWEVER, I've seen cases where you need 8GB, because giving the VM less means that it won't connect to the Infoblox Portal.
  7. Log into the VM with the username admin and the password infoblox.
  8. Run the set temp_license command so that we can set temporary 60-day licences. This step can be skipped if you have actual licences, but it is useful in a lab environment.
  9. Select option 2 for “DNSone with Grid (DNS, DHCP, Grid)”. This will restart the web UI and return you to the CLI.
  10. Run the set temp_license command again.
  11. Select option 4 for “Add NIOS License” and then select the appropriate VM licence (e.g. option 3 for VIB-815 model)
  12. You will then be returned to the CLI. At this point you need to wait about 30 seconds and then the VM will restart.
  13. Run the set network command to configure the LAN1 interface IP / subnet and default gateway. (Probably) don't bother with IPv6, and do not become a Grid Member. This will restart the appliance.
  14. If (and only if) you need to use the MGMT interface (not usual in most environments), run the set interface mgmt command to configure the MGMT interface.
  15. You can now access the Web UI by accessing the IP you set. You will not be able to SSH in unless you enable remote console access.
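The licensing and network steps above amount to a short console session (a sketch; the menu option numbers match the steps above but may differ between NIOS releases):

  set temp_license      # option 2: DNSone with Grid (DNS, DHCP, Grid)
  set temp_license      # option 4: Add NIOS License, then pick the VM model
  set network           # set the LAN1 IP, netmask and gateway; decline Grid join
  set remote_console    # optional: enable SSH access to the appliance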

Remember, when deploying VM HA in VMware, you need to update the security settings on the port-group used by the Infoblox VMs to accept “MAC address changes” and “Forged transmits”. This is so that VMware allows the VMs to have multiple MAC addresses per vNIC (which is needed for Infoblox HA).
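If you want to script that port-group change, a sketch using VMware PowerCLI (the port-group name “Infoblox-PG” is an assumption; for a distributed switch use the VDPortgroup equivalents):

  Get-VirtualPortGroup -Name "Infoblox-PG" |
    Get-SecurityPolicy |
    Set-SecurityPolicy -MacChanges $true -ForgedTransmits $true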

Remember, you can specify temp licences with:

enterprise grid dns dhcp rpz cloud_api vnios nios cloud IB-V926

MGMT Port

NIOS appliances come with a MGMT port. By default it is not used.

In general, avoid using MGMT on the GM/GMC boxes. However, if you need to, make sure you enable “Grid Properties > General > Advanced > Enable GUI/API Access via both MGMT and LAN1/VIP” because otherwise the Web UI will move to the MGMT port and live on the active node's MGMT interface. This means that the GM Web UI will change IP when HA failover happens.

When the GM is in HA, you can SSH to the VIP (which lives on the HA port of the active node), to the HA interface IP of the active member, and to the LAN1 port of either member. You cannot SSH to the HA port IP of the passive member.

If you enable MGMT on an HA GM, you can still SSH to the LAN1 interfaces, the VIP and the active node's HA port, assuming you haven't restricted SSH to MGMT only. You can also SSH to MGMT.

You can configure the MGMT port in the web UI or, for new machines, on the CLI with the following command (except for public cloud instances which have to use the cloud DHCP system to assign the IP address).

set interface mgmt

Before you configure a new node to join a Grid, you must first set a configuration for that node in the Grid. As part of this, you can set a MGMT port configuration.

Regardless of whether the MGMT port is used to connect a member to the GM (see “Enable VPN on MGMT Port” below), the MGMT port, when enabled, will be used for SSH access to the node, SNMP monitoring, SNMP Trap source and email notification source.

Regardless of whether the MGMT port is used to connect a member to the GM (see “Enable VPN on MGMT Port” below), the MGMT port, when enabled, will not be used for DNS lookups made from the member itself (e.g. lookups for the NTP servers to use) and it will not be used for NTP sync. Both system DNS lookups and NTP egress out of LAN1. The only way around this is to add a static route to the member's configuration. E.g. if the member is set to use 9.9.9.9 for DNS recursion, you can add a static route to 9.9.9.9/32 and set the next hop to be the MGMT interface's default gateway.

You cannot configure the MGMT interface to be in the same subnet as the LAN1 interface.

When you configure a node in the Grid, you can set Member>Network>Advanced>“Enable VPN on MGMT Port”.

When you join a node to a Grid, this setting determines which interface the member uses to build its VPN tunnel back to the GM: the MGMT port if enabled, LAN1 otherwise.

If a member already has “Enable VPN on MGMT Port” enabled and you untick it and click OK, after a few seconds both members of the HA pair will fall offline and then both will come back online a minute or so later, having connected back to the GM from the LAN1 port, assuming LAN1 has access to the GM's LAN1 VIP. WARNING!!! If you change the status of “Enable VPN on MGMT Port” and the other interface does not have access to the GM VIP, the member will not reconnect and you will need to wipe the member and re-join it to the Grid.

KVM

NIOS can be deployed on Proxmox. You need to convert the supplied KVM image and then import it into Proxmox using the qcow2 CLI commands. The VM then fails to work as-is, so edit the network interface model from E1000 to virtio and add four more interfaces.
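A sketch of the Proxmox import using the qm CLI (the VM ID 100, storage name “local-lvm”, bridge “vmbr0” and image filename are assumptions; adjust for your environment):

  qm create 100 --name nios --memory 16384 --cores 4
  qm importdisk 100 nios-9.0.7-ddi-disk1.qcow2 local-lvm
  qm set 100 --virtio0 local-lvm:vm-100-disk-0 --boot order=virtio0
  qm set 100 --net0 virtio,bridge=vmbr0    # MGMT
  qm set 100 --net1 virtio,bridge=vmbr0    # LAN1
  qm set 100 --net2 virtio,bridge=vmbr0    # HA
  qm set 100 --net3 virtio,bridge=vmbr0    # LAN2

Using virtio for every NIC from the start avoids the failure seen with the default E1000 model; keep the NIC order MGMT, LAN1, HA, LAN2 so the appliance maps interfaces correctly.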