====== Deploy Virtual Infoblox Appliance ======
Where possible, deploy all vCPUs on the same physical socket. If you require more vCPUs than you have cores per physical socket, NUMA effects may make performance sub-optimal, particularly if you are using ADP or DCA. If you are not using ADP or DCA, keeping the VM within a single NUMA node matters less.
The resizable VM models can scale up to a 2.5TB disk.
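One way to encourage single-socket placement on VMware is via the VM's advanced settings. The keys below are standard .vmx settings, but the values are only a sketch for a hypothetical 8-vCPU VM on a host with at least 8 cores per socket (adjust for your sizing):

```
numvcpus = "8"
cpuid.coresPerSocket = "8"
numa.nodeAffinity = "0"
```

''cpuid.coresPerSocket'' presents all vCPUs as one virtual socket; ''numa.nodeAffinity'' optionally pins the VM to NUMA node 0. Only set node affinity if you understand the host's NUMA layout, as it restricts the scheduler.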
===== Documentation =====
* [[https://docs.infoblox.com/space/NVIG/35786250/About+Infoblox+NIOS+Virtual+Appliance+for+VMware|VMware]]
* [[https://docs.infoblox.com/space/NMHIG/35850559/vNIOS+Virtual+Appliance+Specifications+for+Microsoft+Hyper-V|Hyper-V]]
* [[https://docs.infoblox.com/space/nutanix/35419387/About+Infoblox+vNIOS+for+Nutanix+AHV|Nutanix AHV]]
* [[https://docs.infoblox.com/space/NKHOIG/35980413/vNIOS+for+KVM+Virtual+Appliance+Models|KVM]]
* [[https://docs.infoblox.com/space/vniosopenshift/35947660/About+Infoblox+vNIOS+for+Red+Hat+OpenShift|RedHat OpenShift]]
* [[https://docs.infoblox.com/space/vniosazure/37486676/Supported+vNIOS+for+Azure+Appliances|Azure]]
* [[https://docs.infoblox.com/space/NAIG/37651026/Infoblox+vNIOS+for+AWS+AMI+Shapes+and+Regions|AWS (Amazon Web Services)]]
* [[https://docs.infoblox.com/space/vniosgcp/35419609/Supported+vNIOS+for+GCP+Models|GCP (Google Cloud Platform)]]
* [[https://docs.infoblox.com/space/vniosoci/35443534/About+Infoblox+vNIOS+for+Oracle+Cloud+Infrastructure|OCI (Oracle Cloud Infrastructure)]]
===== X6 =====
* When you purchase an X6 with Microsoft Management installed, be aware that you cannot install Microsoft Management on a NIOS VM that has the CP licence installed. So, although the X6 appliance is licenced for CP, you cannot install the CP licence if you want to use Microsoft Management.
* You need to install the DNS licence before installing RPZ and DTC. If you try to install all the licences at the same time, RPZ and DTC may fail to install; just re-import those licences and they will then install.
===== VMware =====
The following steps show how to deploy an Infoblox VMware OVA image onto VMware Workstation on Windows 10. Deploying on ESXi should be similar.
- Deploy nios-9.0.7-ddi.ova by double clicking it in Windows Explorer.
- Accept the terms of the license agreement and click 'Next'.
- Pick a name for the VM and click 'Next'.
- Select the model of Infoblox Appliance you want to deploy (e.g. TE-815 or TE-825, etc). This is how VMware knows how much RAM and CPU to set on the VM.
- You now have an option to pre-populate configuration but ignore this and just click 'Import'.
- The machine will deploy and boot. Make sure you connect the VM's four network interfaces to the appropriate VLANs in the hypervisor. The interface order is MGMT, LAN1, HA, LAN2, so LAN1 (the interface you access by default) is the second network interface; the first is the MGMT interface, which is not used by default. Also remember that, **IN A LAB**, you can get away with reducing the RAM from 16GB to 4GB (or even 2GB) and from 4 CPUs to 2 (or even 1). **HOWEVER**, I've seen cases where you need 8GB because with less RAM the VM won't connect to the Infoblox Portal.
- Log into the VM with the username ''admin'' and the password ''infoblox''.
- Run the ''set temp_license'' command so that we can set temporary 60-day licences. This step can be skipped if you have actual licences, but it is useful in a lab environment.
- Select option 2 for "DNSone with Grid (DNS, DHCP, Grid)". This will restart the web UI and return you to the CLI.
- Run the ''set temp_license'' command again.
- Select option 4 for "Add NIOS License" and then select the appropriate VM licence (e.g. option 3 for the VIB-815 model).
- You will then be returned to the CLI. At this point you need to wait about 30 seconds and then the VM will restart.
- Run the ''set network'' command to configure the LAN1 interface IP address, subnet mask and default gateway. (Probably) don't bother with IPv6 and do not become a Grid Member. This will restart the appliance.
- If (and only if) you need to use the MGMT interface (not usual in most environments), run the ''set interface mgmt'' command to configure the MGMT interface.
- You can now access the Web UI by accessing the IP you set. You will not be able to SSH in unless you enable [[infoblox_nios:remote_console_access|remote console access]].
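Once the Web UI is up, you can also sanity-check API access with a WAPI call. A sketch only: the IP address and WAPI version below are placeholders for your deployment.

```
curl -k -u admin:infoblox "https://192.0.2.10/wapi/v2.12/grid"
```

This should return a JSON list containing the Grid object reference; an authentication or TLS error here usually means the UI/API interface or credentials are not what you expect.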
**Remember**, when deploying VM HA in VMware, you need to update the security settings on the port-group that is used by the Infoblox VMs to accept "MAC address changes" and "Forged transmits". This is so that VMware allows the VMs to have multiple MAC addresses per vNIC (which is needed for Infoblox HA). [[https://www.edge-cloud.net/2013/05/21/infoblox-vnios-ha-pair-vip-unreachable-when-deployed-on-vsphere|More data here]].
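On a standard vSwitch, those port-group security settings can be applied from the ESXi shell. A sketch; "VM Network" is a placeholder port-group name:

```
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name="VM Network" \
    --allow-mac-change=true \
    --allow-forged-transmits=true
```

For a Distributed vSwitch, set the equivalent security policy on the distributed port-group in vCenter instead.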
**Remember**, you can specify multiple temp licences in one go with:
  enterprise grid dns dhcp rpz cloud_api vnios nios cloud IB-V926
===== MGMT Port =====
NIOS appliances come with a MGMT port. By default it is not used.
In general, avoid using MGMT on the GM/GMC boxes. However, if you need to, make sure you enable "Grid Properties > General > Advanced > Enable GUI/API Access via both MGMT and LAN1/VIP" because otherwise the Web UI will move to the MGMT port and live on the active node's MGMT interface. This means that the GM Web UI will change IP when HA failover happens.
When the GM is in HA, you can SSH to the VIP (which lives on the HA port of the Active node), to the HA interface IP on the Active member, and to the LAN1 port on either member. You cannot SSH to the HA port IP on the Passive member.
If you enable MGMT on a HA GM, you can still SSH to the LAN1 interfaces, the VIP and the active node's HA port - assuming you haven't restricted SSH to MGMT only. You can also SSH to MGMT.
You can configure the MGMT port in the web UI or, for new machines, on the CLI with the following command (except for public cloud instances which have to use the cloud DHCP system to assign the IP address).
  set interface mgmt
Before you configure a new node to join a Grid, you must first set a configuration for that node in the Grid. As part of this, you can set a MGMT port configuration.
Regardless of whether the MGMT port is used to connect a member to the GM (see ''"Enable VPN on MGMT Port"'' below), the MGMT port, when enabled, will be used for SSH access to the node, SNMP monitoring, SNMP Trap source and email notification source.
Regardless of whether the MGMT port is used to connect a member to the GM (see ''"Enable VPN on MGMT Port"'' below), the MGMT port, when enabled, will not be used for DNS lookups made from the member itself (e.g. to do DNS lookups for the NTP servers to use) and it will not be used for NTP sync. Both system DNS lookups and NTP egress out of LAN1. The only way around this is to add a static route to the member's configuration, e.g. if the member is set to use 9.9.9.9 for DNS recursion, you can add a static route to 9.9.9.9/32 and set the egress to be the MGMT interface's default gateway.
You cannot configure the MGMT interface to be in the same subnet as the LAN1 interface.
When you configure a node in the Grid, you can set ''Member>Network>Advanced>"Enable VPN on MGMT Port"''.
* You can only set this if the member is NOT a GM and NOT a GMC.
* It is not possible to have a GM or GMC with this option enabled because the GM and GMC appliance MUST use the LAN1 interface for Grid Communications (the UI will throw an error if you enable the option and then make the member a GMC).
* It is possible to have a HA GM/GMC and have their MGMT interfaces enabled.
* If the MGMT interface of the GM is enabled, the Web UI (and thus API access) will be set to the MGMT interface. It is possible to then enable on LAN1/VIP as well with "Enable GUI/API Access via both MGMT and LAN1/VIP". (see above)
* If the MGMT interface of the GM is enabled and the GM is a HA pair, then the Web UI (and thus API access) will be set to the MGMT interface of the ACTIVE member. After HA failover, web access will be via the other member's MGMT interface, so you will need to access a different IP to get to the Web UI (unless "Enable GUI/API Access via both MGMT and LAN1/VIP" is used - see above).
* Given the above, don't enable MGMT on a GM HA pair. If the GM is standalone, you can enable MGMT. There is no "Enable VPN on MGMT Port" option because it is a GM. Once MGMT is enabled, the Web UI will, after a short while, move to the MGMT interface and NOT the LAN1 interface. You can still SSH to LAN1. Grid communications will still happen over LAN1 even though the GUI has moved to MGMT.
* An example might be a Grid of two standalone appliances that serve DNS. You might enable and use MGMT for UI/API access because the only time the Web UI will switch to the second device is if a GMC promotion happens. Obviously, this Grid design is not best practice because a) best practice includes HA and b) best practice separates the management layer from the service layer.
* Realistically, when you use MGMT, you have GM and GMC located fully inside the MGMT network and only using LAN1 (+HA if needed) and NOT MGMT interfaces. You then have your members using MGMT interfaces to the MGMT network and data services using LAN1 interfaces.
* You can serve DNS recursion services on the MGMT interface of an appliance. If the appliance is a HA pair, the DNS will run on the HA VIP and the MGMT interface of the ACTIVE node. It will not run on the MGMT interface of the PASSIVE node and it will not run on either nodes' LAN1 interface.
When you join a node to a Grid:
* If you have not used the CLI to set the MGMT port on the node, it will use LAN1 to connect to the GM VIP. If the Grid configuration for that node has a MGMT interface configured, the GM will configure the MGMT interface on that member.
* When joining the Grid, if MGMT port is configured, you will be asked ''"Enable grid services on the Management port?"''. This is basically saying "Do you want to use the MGMT port to connect to the GM or do you want to use the LAN1 port to connect to the GM?"
* If the node has LAN1 and MGMT configured but the GM only has LAN1 configured for the node, then you MUST use LAN1 to connect to the GM and the MGMT interface will be de-configured as the node joins the Grid. The MGMT interface on the node cannot be used to join the Grid.
* If you are joining a GMC to the Grid or a secondary node to the GM HA pair, even if MGMT is enabled, you must still use LAN1 to join the GM VIP because MGMT cannot be used for Grid communications on a GM/GMC. This means that you must say "N" when asked ''"Enable grid services on the Management port?"'' when joining a GMC/GM to a Grid even when MGMT is enabled.
* If you have enabled ''"Enable VPN on MGMT Port"'':
* You MUST pre-configure the MGMT interface on the new node before connecting to the Grid. (or the join will fail)
* You MUST say '''y''' when asked by the ''set membership'' wizard "Enable grid services on the Management port?" (or the join will fail)
* If you have NOT enabled ''"Enable VPN on MGMT Port"'':
* Then pre-configuring the MGMT interface is optional.
* You MUST say '''n''' when asked by the ''set membership'' wizard "Enable grid services on the Management port?" (or the join will fail).
* If the GM has MGMT configuration for the node, the MGMT port of the node will be configured as part of the Grid join.
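The join flow on the new node looks roughly like this. This is a paraphrased sketch, not a verbatim transcript; all values are placeholders, and only the ''"Enable grid services on the Management port?"'' question is quoted from the wizard:

```
Infoblox > set membership
  Grid Master VIP:      192.0.2.1
  Grid Name:            Infoblox
  Grid Shared Secret:   test
  Enable grid services on the Management port? (y or n): n
```

Answer the Management-port question ''y'' only when ''"Enable VPN on MGMT Port"'' is set for this node in the Grid; otherwise answer ''n'', or the join will fail.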
If a member already has ''"Enable VPN on MGMT Port"'' enabled and you untick it and click OK, after a few seconds both members of the HA pair will fall offline and then both will come back online about a minute or so later, having connected back to the GM from the LAN1 port - assuming LAN1 has access to the GM LAN1 VIP. WARNING!!! If you change the status of ''"Enable VPN on MGMT Port"'' and the other interface does not have access to the GM VIP, the member will not reconnect and you will need to wipe and re-join the member to the Grid.
===== KVM =====
NIOS can be deployed on Proxmox. You need to convert the supplied KVM file and then import it into Proxmox as a QCOW2 disk on the CLI. The resulting VM fails to work as-is, so change the network interface model from E1000 to virtio, then add 4 more interfaces.
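A sketch of the Proxmox import using standard ''qm'' commands. The image name ''nios.qcow2'', VM ID ''100'', storage ''local-lvm'' and bridge ''vmbr0'' are all placeholders, and the disk bus may need adjusting for your NIOS version:

```
qm create 100 --name nios --memory 16384 --cores 4
qm importdisk 100 nios.qcow2 local-lvm
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
qm set 100 --net0 virtio,bridge=vmbr0
qm set 100 --net1 virtio,bridge=vmbr0 --net2 virtio,bridge=vmbr0
qm set 100 --net3 virtio,bridge=vmbr0 --net4 virtio,bridge=vmbr0
```

Note every NIC is created as ''virtio'' from the start, which avoids having to swap the model away from E1000 after the first boot.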