NIOS Azure
In Azure, when we enabled MGMT on the GM, the GUI became unavailable because it moved to MGMT, which had no public IP. It could still be reached on the MGMT interface's private IP from a bastion host. Without powering off the GM VM, we could disassociate the public IP from LAN1 and then associate it with the MGMT interface, after which the GUI was reachable again. Note that disassociating and re-associating the public IP caused the IP address itself to change (likely because it used dynamic allocation; a static public IP would keep its address).
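A rough sketch of that public-IP move with the Azure CLI; the resource group, NIC, ipconfig, and public IP names below are hypothetical placeholders, so substitute your own:

```shell
# Dissociate the public IP from the LAN1 NIC (names are placeholders)
az network nic ip-config update \
  --resource-group myRG --nic-name gm-lan1-nic --name ipconfig1 \
  --remove publicIpAddress

# Associate the same public IP with the MGMT NIC
az network nic ip-config update \
  --resource-group myRG --nic-name gm-mgmt-nic --name ipconfig1 \
  --public-ip-address gm-public-ip
```

As noted above, a dynamically allocated public IP may change address during this shuffle.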
Serial console access appears to be enabled by default in Azure. That makes it easy to promote a Grid Master.
When deploying vNIOS from the Azure Marketplace, the template requires an NSG to be created by the template. You can delete and replace the NSG afterwards, but one has to be created by the template. If you don't have permission to create an NSG, you will need to deploy via the CLI, an ARM template, or Terraform.
Azure storage: use Premium, not Standard, and use locally-redundant storage (LRS).
Private Link
When configuring a forward zone for web.core.windows.net, you will have to untick “Forward only” because the Private Link resolver will respond with authoritative data for windows.net, which is not a subdomain of web.core.windows.net.
HA
Documentation on Azure HA.
X6 in Azure
List of supported Azure models
| Model | VM CPUs | VM Memory | Image | Image for Azure Hub |
|---|---|---|---|---|
| TE-926 | 4 | 32 GB | Standard_E4s_v3 | Standard_E4_v3 |
| TE-1516 | 6 | 64 GB | Standard_E8s_v3 | Standard_E8_v3 |
| TE-1526 | 8 | 64 GB | Standard_E16s_v3 | Standard_E16_v3 |
| TE-2326 | 10 | 192 GB | Standard_E20s_v3 | N/A |
| TE-4126 | 16 | 384 GB | Standard_E32s_v3 | N/A |
| TE-v825/CP-v805 | 2 | 14 GB | Standard_DS11_v2 | N/A |
| TE-v1425/CP-v1425 | 4 | 28 GB | Standard_DS12_v2 | N/A |
| TE-v2225/CP-v2225 | 8 | 56 GB | Standard_DS13_v2 | N/A |
HA is not supported on the TE-926 because it only has two interfaces; three are needed for HA (MGMT, LAN1, HA).
https://cloudprice.net/vm/Standard_E4s_v3
Limitations and recommendations specific only to Azure Public Cloud:
- Azure Public Cloud does not support IPv6 network configuration.
- Adding or deleting a network interface while a vNIOS for Azure instance is powered on can result in unexpected behavior. You must first power off the instance, add or delete the interface, and then start the instance.
- vNIOS for Azure Public Cloud instances do not support high availability (HA) configuration on the following virtual machine sizes:
- Standard DS11 v2 that is used in IB-V825 and CP-V805 appliances
- Standard_E4s_v3 that is used in IB-V926 appliances
- vNIOS for Azure Public Cloud instances do not support an HA setup with nodes on different cloud platforms, regions, or hosts.
- Due to a certain restriction from Azure, the Address Resolution Protocol (ARP) functionality on the passive node of an HA pair always remains enabled. It cannot be disabled. Therefore, the passive node always responds to ping requests.
- The time taken for an HA failover can vary depending on the response time from the host.
- vNIOS for Azure does not support automatic upgrade of software (NIOS) on an HA node if the node is running a NIOS version prior to 9.0.4.
Deploying in Azure
Azure PowerShell
```powershell
$locName = "UK South"
$pubName = "infoblox"
$offerName = "infoblox-vm-appliances-903"
$skuName = "vgsot"
$versionName = "903.50212.0"
Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $versionName
```
Find Publisher
```powershell
$locName = "UK South"
Get-AzVMImagePublisher -Location $locName
```
Find Offer
```powershell
$pubName = "infoblox"
Get-AzVMImageOffer -Location $locName -PublisherName $pubName | Select Offer
```
```
infoblox-bloxone-33
infoblox-bloxone-34
infoblox-cp-v1405
infoblox-nios-for-9_0_x-for-ddi
infoblox-vm-appliances-853
infoblox-vm-appliances-860
infoblox-vm-appliances-861
infoblox-vm-appliances-862
infoblox-vm-appliances-863
infoblox-vm-appliances-900
infoblox-vm-appliances-901
infoblox-vm-appliances-902
infoblox-vm-appliances-903
infoblox-vm-appliances-904
infoblox-vm-appliances-904-test
infoblox-vm-appliances-905
infoblox-vnios-te-v1420
infoblox_nios_payg
```
Get Skus
```powershell
$offerName = "infoblox-vm-appliances-903"
Get-AzVMImageSku -Location $locName -PublisherName $pubName -Offer $offerName | Select Skus
```
```
Skus
----
niosprivateoffer
vgsot
vgsot-ni
vsot
vsot-ni
```
Get Version
```powershell
$skuName = "vgsot-ni"
Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName
```
DHCP in Azure
NOTE: See the end of this section for why DHCP still doesn't work properly in Azure.
Unlike AWS and GCP, Azure itself uses standard DHCP code to issue addresses to VMs in Azure. This seems to make them more sensitive to T1 timers (somehow).
As of 10th March 2024, Azure now supports DHCP servers in Azure for on-prem clients. Guide here.
DHCP Server in Azure was previously marked as unsupported since the traffic to port UDP/67 was rate limited in Azure. However, recent platform updates have removed the rate limitation, enabling this capability.
As of 10th March 2024, the only caveat is that DHCP renew doesn't work, so the lease has to expire and be completely re-requested.
On-premises client-to-DHCP-server traffic (source port UDP/68, destination port UDP/67) is still not supported in Azure, since this traffic is intercepted and handled differently. This results in timeout messages when the client attempts a DHCP RENEW directly against the DHCP server in Azure at T1, but the attempt should succeed at T2, when the DHCP RENEW is made via the DHCP relay agent. For more details on the T1 and T2 DHCP RENEW timers, see RFC 2131.
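The T1 and T2 timers mentioned above can be sketched with their RFC 2131 defaults (T1 fires at half the lease, when the client unicasts a RENEW to its server; T2 fires at 87.5% of the lease, when it falls back to broadcast/relay REBIND):

```python
# RFC 2131 default renewal timers, derived from the lease duration.
# T1 (RENEWING): client unicasts directly to the leasing server.
# T2 (REBINDING): client broadcasts, reaching the server via a relay.

def dhcp_timers(lease_seconds: int) -> tuple[float, float]:
    """Return the default (T1, T2) timers for a given lease duration."""
    return 0.5 * lease_seconds, 0.875 * lease_seconds

t1, t2 = dhcp_timers(86_400)  # a 1-day lease
print(t1, t2)  # 43200.0 75600.0
```

This is why the Azure behaviour above manifests as a timeout at the halfway point of the lease, followed by success at 87.5% of the lease.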
OLD DATA BELOW:
NIOS supports DHCP. However, you should not try to use NIOS or BloxOne for DHCP in Azure.
As of 31 Aug 2023…
Can I deploy a DHCP server in a virtual network?
Azure virtual networks provide DHCP and DNS service to VMs; client/server DHCP traffic (source port UDP/68, destination port UDP/67) is not supported in a virtual network.
You can't deploy your own DHCP service to receive and provide unicast or broadcast client/server DHCP traffic for endpoints inside a virtual network. Deploying a DHCP server VM with the intent to receive unicast DHCP relay (source port UDP/67, destination port UDP/67) traffic is also an unsupported scenario.
AWS and GCP should be fine. The issue is with Azure itself. A word-of-mouth anecdote is that Microsoft throttles DHCP requests from on-prem to Azure-based DHCP servers to 20 QPS. It seems the Azure project managers were fine with third-party DHCP servers in Azure, but the Azure security team was not.
Some unofficial lab testing in Feb 2023 seemed to show that the DHCP rate limiting in Azure had been lifted and that it works. However, Microsoft still did not support it as of May 2023.
Why DHCP in Azure is still a bad idea
When two DHCP members are configured in a failover association (this applies to any system using failover associations), the first lease given to a client is the MCLT (normally 1 hour). Halfway through this (e.g. at 30 minutes) the client will try to renew, and the DHCP server would then issue a full lease. Since Azure blocks T1, this cannot happen. At T2, the client issues a rebind. The DHCP failover association sees the T2 rebind, assumes there is a problem with the peer, and so issues the MCLT again.
This means clients can never get the full lease time and thus may easily overwhelm a DHCP server. For example, if you size the DHCP server for 50,000 devices with 14-day lease times, it probably won't cope with 50,000 devices trying to renew every 30 minutes.
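The sizing arithmetic can be sketched as follows (the 50,000-device and 1-hour-MCLT figures are the example's assumptions; the T1 = lease/2 renewal point is the RFC 2131 default):

```python
# Compare steady-state renewal load for a normal long lease versus leases
# pinned at the MCLT, assuming each client sends one request per T1 interval.

DEVICES = 50_000

def renewals_per_second(devices: int, lease_seconds: float) -> float:
    """Steady-state renewal rate, assuming T1 = lease / 2 (RFC 2131 default)."""
    t1 = lease_seconds / 2
    return devices / t1

normal = renewals_per_second(DEVICES, 14 * 24 * 3600)  # 14-day lease
mclt   = renewals_per_second(DEVICES, 3600)            # stuck at 1-hour MCLT

print(f"14-day lease: {normal:.2f} req/s")    # ~0.08 req/s
print(f"1-hour MCLT:  {mclt:.2f} req/s")      # ~27.78 req/s
print(f"multiplier:   {mclt / normal:.0f}x")  # 336x
```

A server sized for the first rate faces roughly 336 times the renewal traffic in the MCLT-stuck scenario, which is why the sizing assumption breaks.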
Because DHCP failover uses “lazy” updates, there is a higher chance in this scenario that when one peer goes down, it does so after having issued a lease its peer was never informed of. That is what the MCLT is for: time for the two peers' DHCP databases to sync up. In the Azure scenario, you are perpetually in a state where you can't be 100% sure both peers are aware of all the leases.
Remember, T2 is rebind and not a renewal. A lot of “normal” devices may work with this but many IoT devices do not cope well without T1 timers.
This does not affect Infoblox Universal DDI because it uses a different methodology for DHCP HA.
