VMware NSX-T Data Center 3.2.1 Release Notes
Introduction
What's New
NSX-T Data Center 3.2.1 provides a variety of new features and enhancements for virtualized networking, security, and
migration from NSX Data Center for vSphere. Highlights include new features and enhancements in the following focus areas.
Federation
Federation brownfield onboarding is again supported. You can promote existing objects on a local manager into the global
manager configuration.
Federation traceflow - Traceflow can now be initiated from the Global Manager and displays the different locations that packets
traverse.
Federation higher latency supported - The supported network latency between the following components increases from 150ms
to 500ms round-trip time:
– Between Global Manager active cluster and Global Manager standby cluster
– Between Global Manager cluster and Local Manager cluster
– Between Local Manager clusters across different locations
This offers more flexibility for security use cases. The maximum latency between RTEPs remains 150ms round-trip time for network
stretch use cases.
Edge Platform
NSX Edge Node (VM form factor) - now supports up to four datapath interfaces. With NSX-T 3.2.1, you can add one more datapath
interface for greenfield deployments. For brownfield deployments, use NSX redeploy if you need more than three vmnics. For an NSX
Edge Node deployed from OVF, you must configure all four datapath interfaces during the deployment.
Distributed Firewall
Distributed Firewall now supports Physical Server SLES 12 SP5.
Gateway Firewall
TLS 1.2 Inspection was available in Tech Preview mode in NSX-T 3.2 and is now available for production environments in NSX-T
3.2.1. With this feature, Gateway Firewall can decrypt and inspect the payload to prevent advanced persistent threats.
IDPS (Intrusion Detection and Prevention System) is introduced in NSX-T 3.2.1. With this feature, Gateway Firewall IDPS
detects any attempt to exploit system flaws or gain unauthorized access to systems.
NSX Data Center for vSphere to NSX-T Data Center Migration
Migration Coordinator now supports migrating to NSX-T environments with Edge Nodes deployed with two TEPs for the following
modes: User-Defined Topologies, Migrating Distributed Firewall Configuration, Hosts and Workloads.
Migration Coordinator supports adding hosts during migration for Single Site.
Migration Coordinator now supports Cross-vCenter to Federation migration, including end-to-end and configuration-only modes.
Migration Coordinator now supports changing certificates during migration.
Install and Upgrade
Rolling upgrade of NSX Management Cluster - When upgrading the NSX Management Cluster from NSX-T 3.2.1, you can now
get near-zero downtime of the NSX Management Plane (MP) by using the Rolling Upgrade feature. With this feature, the
maintenance window for the MP upgrade is shortened, NSX MP API/UI access remains available throughout the upgrade process,
and Data Plane workloads are not impacted.
Install NSX on Bare Metal/Physical Servers as a non-root user - In NSX-T 3.2.1, you can now install NSX on Linux bare
metal/physical servers as a non-root user.
N-VDS to VDS migrator tool
Reintroducing the N-VDS to VDS Migrator Tool enables you to migrate the underlying N-VDS connectivity to NSX on VDS while
keeping workloads running on the hypervisors.
The N-VDS to VDS Migrator Tool now supports migration of the underlying N-VDS connectivity if there are different configurations of
N-VDS with the same N-VDS name.
During NSX deployment on ESX, NSX checks the VDS configured MTU to make sure it can accommodate overlay traffic. If it cannot,
the MTU of the VDS is automatically adjusted to the NSX Global MTU.
Platform Security
Certificate Management Enhancements for TLS Inspection - With the introduction of the TLS Inspection feature, certificate
management now supports addition and modification of certification bundles and the ability to generate CA certificates to be used
with the TLS Inspection feature. In addition, the general certificate management UI carries modifications that simplify import/export
of certificates.
Available Languages
NSX-T Data Center has been localized into multiple languages: English, German, French, Italian, Japanese, Simplified Chinese,
Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that
your settings match the desired language.
Document revision history: Sept 07, 2022 - Known Issue 2816781 is fixed; added Known Issue 3025104.
Resolved Issues
Fixed Issue 2968705: Global Manager UI shows an error message after upgrading to 3.2.0.1 hot
patch.
After installing the 3.2.0.1.2942942 hot patch, Global Manager UI shows the following error message:
Search framework initialization failed, please restart the service via 'restart service global-manager’.
You are not able to access Global Manager UI after deploying the hot patch.
Fixed Issue 2955949: Controller fails to resubscribe with UFO table after a network disconnection.
New API realization fails because the controller cannot receive new notifications from the UFO table.
Fixed Issue 2931127: For edge VM that is deployed using the NSX Manager UI, you are unable to edit
the Edge Transport Node configuration in the UI.
In the Edit Edge Transport Node window, the DPDK fast path interfaces are not displayed for the uplink when the uplink is associated
with an NSX segment.
Fixed Issue 2949527: VM loses DFW rules if it migrates to a host where opsAgent is not generating
VIF attachment notifications.
If a VM was part of a security group with firewall rules applied to it and migrates to a faulty Transport Node, the VM loses the DFW rules
that were inherited from the security group. The VM still has any default DFW rule that the user has configured.
Fixed Issue 2924317: A few minutes after host migration starts, host migration fails with fabric node
creation failure error.
If the overlay status on the NSX-T side is not checked correctly, host migration fails.
Fixed Issue 2937810: The datapath service fails to start and some Edge bridge functions (for example,
Edge bridge port) do not work.
If Edge bridging is enabled on Edge nodes, the Central Control Plane (CCP) sends the DFW rules to the Edge nodes, which should
only be sent to host nodes. If the DFW rules contain a function which is not supported by the Edge firewall, the Edge nodes cannot
handle the unsupported DFW configuration, which causes the datapath to fail to start.
False DPD failures with on-demand DPD (3-second interval, 10 retries).
In IPSec setups that have a large number (more than 30) of IKE sessions configured, local edges deployed in active-standby mode
with HA-Sync enabled, and peers with DPD enabled using default settings, some IKE sessions may be torn down by the peer due to
DPD timeout and re-established during the switchover.
Datapath configuration failure on an Edge node after upgrading the Edge from 3.2.0 to 3.2.1, affecting the datapath.
On an NSX-T Federation setup, when Tier-1 gateways are stretched with DHCP static bindings for downlink segments, the MP also
creates L2 forwarder ports for the DHCP switch. If a single Edge node had two DHCP switches and was restarted, the restart caused
the failure.
Traffic sometimes hits the default deny DFW rule on VMs across different hosts and clusters after upgrading to 3.2.0.1.
The issue is the result of a race condition where two different threads access the same memory address space simultaneously. This
sometimes causes incomplete address sets to be forwarded to transport nodes whose control plane shard is on the impacted controller.
NSX-T Edge service CPU is high and the load balancer is not functioning.
You cannot connect to the backend servers behind the Load Balancer.
PSOD on ESXi hosts during v2T migration due to unexpected VXLAN MTEP packets
PSOD on multiple ESXi hosts during migration. This results in a datapath outage.
No warning and incorrect errors for Live Traffic Analysis (LTA) with the packet capture action when /tmp on NSX is full.
The LTA PCAP file cannot be downloaded due to the /tmp partition being full.
Viewing Group-1 definition while creating Group-2 adds the Group-1 criteria and members to Group-2
on Apply
You may see that the members are copied from the other group while viewing its members. You can always modify and
unselect/remove those items.
Internal Server Error is returned when fetching effective VIF members for a Group during VM vMotion.
During VM vMotion, the effective VIF membership API (https://{{ip}}/policy/api/v1/infra/domains/:domains/groups/:group/members/vifs)
returns an error. The API works correctly after the VM vMotion has completed, so retry the call once the vMotion finishes.
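For example, a minimal call to this API (a sketch only; <nsx-mgr>, <domain-id>, and <group-id> are placeholders for your environment, and basic authentication is assumed):
curl -k -u 'admin:<password>' \
  "https://<nsx-mgr>/policy/api/v1/infra/domains/<domain-id>/groups/<group-id>/members/vifs"
Re-running this call after the vMotion completes returns the effective VIF members without the error.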
Realized NSService/NSServiceGroup objects returned by Search do not have a policyPath tag for some objects.
If a Service or ServiceEntry is created on the Policy side and retrieved using the Search API, the returned NSServiceGroup or
NSService will not have a policyPath tag that contains the path of the Service or ServiceEntry on the Policy side.
NSX-T Bare Metal Edge upgrade fails during a serial upgrade procedure, showing the PNIC as down.
Following the upgrade reboot, the dataplane service fails to start. The syslogs indicate an error in a python script. For example, 2021-
12-24T15:19:19.274Z HKEDGE02.am-int01.net datapath-systemd-helper 6976 - - fd = file(path) 2021-12-24T15:19:19.296Z
HKEDGE02.am-int01.net datapath-systemd-helper 6976 - - NameError: name 'file' is not defined
On the partially upgraded Edge, the dataplane service is down. There will still be an active Edge in the cluster, but it might be down to a
single point of failure.
NSX-T Edge upgrade fails while upgrading to 3.2 with an error message.
A newly deployed Edge has a Configuration State of Failed with an error, leaving the Edge datapath non-functional.
Upgrade Coordinator page fails to load in the UI with the error "upgrade status listing failed: Gateway Time-out".
You may not be able to navigate and check the upgrade status after starting a large-scale upgrade, because the Upgrade Coordinator
page fails to load in the UI with this error.
CCP does not configure the exclude-port grouping tag on LCP VM VIFs (LSPs), so default security policies block LSP data traffic.
CCP might have problems configuring the DFW Exclude List if the upgrade path includes the NSX-T 3.2.0 or 3.2.0.1 release. You will not
be able to see DFW Exclude List members from the MP side, and you may find that members in the firewall exclude list are not being
excluded. One of the entries in the database that the CCP consumes is missing because the internal records were overwritten by the
infra ones. This issue does not occur if you upgrade directly from the NSX-T 3.0.x or 3.1.x release to the NSX-T 3.2.1 release.
Edge redeploy does not raise an alarm if it cannot find or delete a previous Edge VM that is still connected to the MP.
With power-off and delete operations through VC failing, Edge redeploy operation may end up with two Edge VMs functioning at the
same time, resulting in IP conflicts and other issues.
ISO installation on a bare metal Edge fails with a black screen after reboot.
Installation of NSX-T Edge (bare metal) may fail during the first reboot after installation is complete on a Dell PowerEdge R750 server
while in UEFI boot mode.
A maximum-routes configuration in the L2VPN EVPN address family causes the FRR configuration to enter a failed state, and no
further routing configuration is pushed.
Routing stops working.
Policy API is unable to fetch inventory, networking, or security details: "Index out of sync, please resync via start search resync policy."
In NSX-T 3.x, elastic search has been configured to index IP range data in the format of IP ranges instead of Strings. A specific IP
address can be searched from the configured IP address range for any rule. Although elastic search works fine with existing formats like
IPv4 addresses and ranges, IPv6 addresses and ranges, and CIDRs, it does not support IPv4-mapped IPv6 addresses with CIDR
notation and will raise an exception. This will cause the UI to display an "Index out of sync" error, resulting in data loading failure.
The default VNI pool is not migrated correctly if the default pool's name was changed before NSX-T 3.2.
The default VNI pool was named "DefaultVniPool" before NSX-T 3.2. The VNI pool is migrated incorrectly if it was renamed prior to the
NSX-T 3.2 release. The upgrade or migration does not fail, but the pool data is inconsistent.
Missing translations on CCP for some IPs when an IP range starts with 0.0.0.0.
An NSGroup with an IP range starting with 0.0.0.0, for example "0.0.0.0-255.255.255.0", has translation issues (the 0.0.0.0/1 subnet is
missing). NSGroups with an IP range such as "1.0.0.0-255.255.255.0" are unaffected.
Flows are not streamed through the Pub/Sub channel after migration.
When migrating to NSX-T 3.2, the broker endpoint for Pub/Sub subscriptions does not get updated. The subscription stops receiving
flows if the broker IP is incorrect.
NIOC configuration is blocked for ENS and ENS_INTR mode on ESXi versions earlier than 7.0.1.
Creating or updating Transport Nodes appears to be successful. However, the actual configuration of the NIOC profile will not be
applied to the datapath, so it will not work.
After a restore, "NSX Install Skipped" is shown for a host that was added after the backup or from which NSX was removed after the
backup.
Hosts are not in a prepared state, but the transport node profile is still applied at the cluster.
Search APIs continuously fail after an upgrade, and users are not able to deploy any pods.
Depending on the scale of the system, the Search API and the UI are unusable until the re-indexing is complete.
Federation: Unable to sync objects to the standby Global Manager after changing the standby GM password.
You cannot observe the Active GM on the standby GM's Location Manager, nor receive any updates from the Active GM.
DFW rule remains applied even after forcefully deleting the manager NSGroup.
Traffic may still be blocked after forcefully deleting the NSGroup.
[Backup restore] "Restore is not allowed on 3 node cluster" error displayed in middle of restore
process.
When starting a restore on a multi-node cluster, restore fails and you will have to redeploy the appliance.
Config import is stuck in the IN_PROGRESS state after a Local Manager restore, when a config import was triggered and the LM at
the site was immediately restored.
UI shows IN_PROGRESS in Global Manager for Local Manager onboarding. Configuration of the restored site cannot be imported.
Incorrect VM counts under both segments for VMs with multiple NICs.
You see an incorrect count of VMs connected to a Segment in Network Topology. However, the actual count of VMs associated with the
Segment is shown when you expand the VM node.
Sending a double-encapsulation offloading packet with ENS enabled can cause an ESXi PSOD (/vmkernel/net/ens/ens_mbuf.h:304).
Can result in PSOD.
Consolidated Effective IP Addresses not returning static ips for reference groups
No functional or datapath impact. You will not see the static IPs in the GET consolidated effective membership API for a shadow group.
This is applicable only for a shadow group (also called reference groups).
Unable to add a Local Manager site to the Global Manager.
There is no functional impact. Config import is not supported in NSX-T 3.2.
Install is not allowed and fails as expected because version 6.0.0 is not supported, but the error message is wrong and misleading.
The error message is shown via the UI and API.
Update of an IPv6 manual binding succeeds, but the API output differs from the previous release.
The LogicalPort GET API output in NSX-T 3.2 is in the expanded format "fc00:0:0:0:0:0:0:1", compared to the NSX-T 3.0 format "fc00::1".
Security groups in Edge Firewall rules are not migrated to NSX-T in a BYOT config-only migration.
During migration, there may be a gap until the VMs/VIFs are discovered on NSX-T and become part of the security groups to which they
are applicable via static or dynamic memberships. This can lead to traffic being dropped or allowed contrary to the Edge Firewall rules in
the period between the North/South cutover (N/S traffic going through the NSX-T gateway) and the end of the migration.
After OSPF migration, you must manually sync the MTUs for OSPF to reach the FULL state.
During NSX for vSphere to NSX-T migration, the MTU is not automatically migrated so a mismatch can impact dataplane during
North/South Edge cutover. OSPF adjacency is stuck in Exstart state.
MP updates the LB pool status to UP only after about 1.5 hours, even though the Edge had the LB, virtual server, pool, members, and
health monitor in the Up state within 3 minutes of creation.
Wrong load balancer status is shown.
Federation upgrade: After the upgrade, new default switching profiles are added.
No functional impact.
Datapath outage in inter-TEP deployments where the Edge and ESX transport node use the same VLAN ID.
North-south traffic between workload VMs on the ESX host and the Edge stops working because ESX drops packets that are destined
for the Edge VTEP.
Some groups are not deleted from the Global Manager and remain in a greyed-out state for several hours.
This happens only if that Group is a reference group on a Local Manager site. No functional impact; however, you cannot create another
Group with the same policypath as the deleted group.
After deletion of Edges, some stale Edge node entries are still present in the NSX-T UI.
Although the Edge VM is deleted, stale Edge intent and internal objects are retained in the system, and the delete operation is retried
internally. There is no functional impact, as the Edge VMs are deleted and only the intent has stale entries.
CSM upgrade from 3.1.3 to 3.2.0 fails at the run_csm_migration_tool step.
Following the usual workflow will erase all CSM data.
MP NSService and NSServiceGroup tags are missing after upgrade.
There is no functional impact on NSX as Tags on NSService and NSServiceGroup are not being consumed by any workflow. There may
be an impact on external scripts that have workflows that rely on Tags on these objects.
Change in effective member API response for policy group having no effective members
There is no functional impact.
A lowercase NetBIOS domain name breaks IDFW runtime reporting (user_id is blank in the get_user_session_data API).
You cannot easily and readily obtain IDFW current runtime information/status. Current active logins cannot be determined.
Realization error code 1201 is not correctly converted into an error message ([Error Code = '1201', Error Message = '', Affected
Entities = '[]']) for a KVM jump-to realization error.
There is no functional impact.
Redeploying the active PCG fails with the error "Failed to create transport node - required property
transport_node.node_deployment_info.node_settings is missing".
This happens only post upgrade in the first redeployment of PCGs.
Logging in with an "auditor" role username doesn't render any pages post login success
Local user is not able to log in.
[3.1.2->3.2.0 upgrade] NSGroup on MP does not show the membership_criteria and members of GM
created Groups
No functional impact.
LTA trace observation misses the first packet when traffic traverses an Edge.
You cannot see the first packet trace.
Changing the upgrade orchestrator node for a Local Manager site does not show a notification.
If the orchestrator node is changed after the UC is upgraded and you continue with the UI workflow by clicking any action button (pre-
check, start, etc.), you will not see any progress on the upgrade UI. This is only applicable if the Local Manager Upgrade UI is accessed
in the Global Manager UI using site switcher.
The service deployment status alarm is not resolved for a GI+SI solution when a host is moved to another cluster and exits
maintenance mode.
You may see that the Service Deployment status remains in the Down state indefinitely, along with the alarm that was raised.
Fixed Issue 2816781: Physical servers cannot be configured with a load-balancing based teaming
policy as they support a single VTEP.
You won't be able to configure physical servers with a load-balancing based teaming policy.
Fixed Issue 2879119: When a virtual router is added, the corresponding kernel network interface does
not come up.
Routing on the vrf fails. No connectivity is established for VMs connected through the vrf.
Known Issues
Issue 3145439: Rules with more than 15 ports are allowed to publish but fail in later stages.
You may not know that the rule fails to publish or realize for this reason.
Workaround: Break the set of ports/port ranges into multiple rules with smaller sets of ports/port ranges and publish those.
Issue 3152512: Missing firewall rules after the upgrade from NSX 3.0.x or NSX 3.1.x to NSX 3.2.1 can be
observed on the edge node when a rule is attached to more than one gateway/logical router.
Traffic does not hit the correct rule in the gateway firewall and will be dropped.
Workaround: Republish the Gateway Firewall rule by making a configuration change (for example, a name change).
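As an illustration of such a configuration change (a sketch only; <manager_ip>, <gateway-policy-id>, and <rule-id> are placeholders), fetch the rule, change a harmless field such as display_name, and write it back:
GET https://<manager_ip>/policy/api/v1/infra/domains/default/gateway-policies/<gateway-policy-id>/rules/<rule-id>
Edit the returned payload (for example, append a character to "display_name"), then:
PUT https://<manager_ip>/policy/api/v1/infra/domains/default/gateway-policies/<gateway-policy-id>/rules/<rule-id>
The PUT triggers a republish of the rule to the edge nodes.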
Issue 3116294: Rule with nested group does not work as expected on hosts.
Traffic not allowed or skipped correctly.
Workaround: See knowledge base article 91421.
Issues 3046183 and 3047028: After activating or deactivating one of the NSX features hosted on the
NSX Application Platform, the deployment status of the other hosted NSX features changes to In
Progress. The affected NSX features are NSX Network Detection and Response, NSX Malware
Prevention, and NSX Intelligence.
After deploying the NSX Application Platform, activating or deactivating the NSX Network Detection and Response feature causes the
deployment statuses of the NSX Malware Prevention feature and NSX Intelligence feature to change to In Progress. Similarly,
activating and deactivating the NSX Malware Prevention feature causes the deployment status of the NSX Network Detection and
Response feature to In Progress. If NSX Intelligence is activated and you activate NSX Malware Prevention, the status for the NSX
Intelligence feature changes to Down and Partially up.
Workaround: None. The system recovers on its own.
Issue 2983892: The Kubernetes pod associated with the NSX Metrics feature intermittently fails to
detect the ingress traffic flows.
When the Kubernetes pod associated with the NSX Metrics feature intermittently fails to detect the ingress traffic flows, the ingress
metrics data does not get stored. As a result, the missing data affects the metrics data analysis performed by other NSX features, such
as NSX Intelligence, NSX Network Detection and Response, and NSX Malware Prevention.
Workaround: Ask your infrastructure administrator to perform the following steps.
1. Log in to the Kubernetes pod associated with the NSX Metrics feature and run the following command at the system prompt.
2. Change the network policy from allow-traffic-to-contour to allow-traffic-to-all and save the changes.
After the Kubernetes pod restarts, the NSX Metrics feature should be collecting and storing the ingress traffic flows data correctly.
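A possible command sequence for these steps (a sketch only; the nsxi-platform namespace is taken from these notes, and the network policy object name is an assumption to verify in your environment):
# List the network policies in the NSX Application Platform namespace.
kubectl get networkpolicies -n nsxi-platform
# Edit the policy that currently references allow-traffic-to-contour, change it to
# allow-traffic-to-all, then save and exit the editor.
kubectl edit networkpolicy <policy-name> -n nsxi-platform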
Issue 2931403: Network interface validation prevents API users from performing updates.
An Edge VM network interface can be configured with network resources such as port groups, VLAN logical switches, or segments that
are accessible for the specified compute and storage resources. The compute ID (moref) in the intent becomes stale and is no longer
present in vCenter after a power outage (the moref of the resource pool changed after vCenter was restored). API users are blocked
from performing update operations.
Workaround: Redeploy edge and specify valid moref Ids.
Issue 3044773: IDPS Signature Download will not work if NSX Manager is configured with HTTPS
Proxy with certificate.
IDPS On-demand and Auto signature download will not work.
Workaround: Configure an HTTP proxy server (with scheme HTTP). Note that the scheme (HTTP/HTTPS) refers to the connection type
established between NSX Manager and the proxy. Alternatively, use the IDPS offline upload process.
Issue 2962718: A bond management interface can lose members when Mellanox NICs are used on
bare metal edge.
The management interface lost connection with the edge after a reboot. A Mellanox interface was configured as one of the bond slaves.
Workaround: Stop the dataplane service before configuring the bond.
Issue 2965357: When N-VDS to VDS migration runs simultaneously on more than 64 hosts, the
migration fails on some hosts.
As multiple hosts try to update the vCenter Server simultaneously, the migration fails during the TN_RECONFIG_HOST stage.
Workaround: Trigger migration on <= 64 hosts.
Issue 2990741: After upgrading to NSX-T 3.2.x, search functionality does not work in the NSX
Manager UI.
NSX Manager UI shows the following error message:
Search service is currently unavailable, please restart using 'start service search'.
Workaround: Run the following CLI commands on the impacted NSX Manager nodes:
restart service search
restart service policy
Issue 2991201: After upgrading NSX Global Manager to 3.2.1.x, Service entries fail to realize.
Existing Distributed Firewall rules that consume these Services do not work as expected.
Workaround: Do a dummy update of the Service entry by following these steps:
1. Take a backup.
2. Run a GET API to retrieve the Service entry details
3. Update the Service entry without changing the PUT payload as follows:
PUT https://<manager_ip>/policy/api/v1/infra/services/<service-id>/service-entries/<service-entry-id>
Example:
PUT https://<manager_ip>/policy/api/v1/infra/services/VNC/service-entries/VNC
{
"protocol_number": 34,
"resource_type": "IPProtocolServiceEntry",
"id": "VNC",
"display_name": "VNC",
"path": "/infra/services/VNC/service-entries/VNC",
"relative_path": "VNC",
"parent_path": "/infra/services/VNC",
"unique_id": "0c505596-b9ed-4670-a398-e973dc1e57b4",
"realization_id": "0c505596-b9ed-4670-a398-e973dc1e57b4",
"marked_for_delete": false,
"overridden": false,
"_system_owned": false,
"_create_time": 1655870419829,
"_create_user": "admin",
"_last_modified_time": 1655870419829,
"_last_modified_user": "admin",
"_protection": "NOT_PROTECTED",
"_revision": 0
}
Issue 2992759: Prechecks fail during NSX Application Platform 4.0.1 deployment on NSX-T versions
3.2.0/3.2.1/4.0.0.1 with upstream K8s v1.24.
The prechecks fail with the following error message:
"Kubernetes cluster must have minimum 1 ready master node(s)."
Workaround: None.
Issue 2992964: During NSX-V to NSX-T migration, edge firewall rules with local Security Group
cannot be migrated to NSX Global Manager.
You must migrate the edge firewall rules that use a local Security Group manually. Otherwise, depending on the rule definitions (actions,
order, and so on), traffic might get dropped during edge cutover.
Workaround: See VMware knowledge base article https://kb.vmware.com/s/article/88428.
Issue 2994424: URT generates multiple VDSes for one cluster if the named teamings of transport nodes in the cluster are
different.
Transport nodes with different named teamings are migrated to different VDSes, even if they are in the same cluster.
Workaround: None.
Issue 3004128: Edit Edge Transport Node window does not display uplinks from Named Teaming
policies or Link Aggregation Groups that are defined in the uplink profile.
You cannot use uplinks and map them to Virtual NICs or DPDK fast path interfaces.
Workaround: None from UI. Add/Edit Edge Transport Node can be done using REST APIs.
Issue 2936504: The loading spinner appears on top of the NSX Application Platform's monitoring
page.
When you view the NSX Application Platform page after the NSX Application Platform is successfully installed, the loading spinner is
initially displayed on top of the page. This spinner might give the impression that there is a connectivity issue occurring when there is
none.
Workaround: As soon as the NSX Application Platform page is loaded, refresh the Web browser page to clear the spinner.
Issue 2949575: Powering off one Kubernetes worker node in the cluster puts the NSX Application
Platform in a degraded state indefinitely.
After one Kubernetes worker node is removed from the cluster without first draining the pods on it, the NSX Application Platform is
placed in a degraded state. When you check the status of the pods using the kubectl get pod -n nsxi-platform command, some
pods display the Terminating status, and have been in that status for a long time.
Workaround: Manually delete each of the pods that display a Terminating status using the following information.
1. From the NSX Manager or the runner IP host (Linux jump host from which you can access the Kubernetes cluster), run the following
command to list all the pods with the Terminating status.
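For example (a sketch only; pod names are placeholders), the terminating pods can be listed and then force-deleted as follows:
# List pods stuck in the Terminating status in the nsxi-platform namespace.
kubectl get pods -n nsxi-platform | grep Terminating
# Force-delete each pod that remains in the Terminating status (repeat per pod name).
kubectl delete pod <pod-name> -n nsxi-platform --grace-period=0 --force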
Issue 3012313: Upgrading NSX Malware Prevention or NSX Network Detection and Response from
version 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1 fails.
After the NSX Application Platform is upgraded successfully from NSX 3.2.0 to NSX ATP 3.2.1 or 3.2.1.1, upgrading either the NSX
Malware Prevention (MP) or NSX Network Detection and Response (NDR) feature fails with one or more of the following symptoms.
1. The Upgrade UI window displays a FAILED status for NSX NDR and the cloud-connector pods.
2. For an NSX NDR upgrade, a few pods with the prefix of nsx-ndr are in the ImagePullBackOff state.
3. For an NSX MP upgrade, a few pods with the prefix of cloud-connector are in the ImagePullBackOff state.
4. The upgrade fails after you click UPGRADE, but the previous NSX MP and NSX NDR functionalities still function the same as
before the upgrade was started. However, the NSX Application Platform might become unstable.
Workaround: See VMware knowledge base article 89418.
Issue 3025104: Host shows a "Failed" state when a restore is performed with a different IP and the same FQDN.
When a restore is performed using different IPs for the MP nodes but the same FQDN, hosts are unable to connect to the MP nodes.
Workaround: Refresh the DNS cache on the host using the command: /etc/init.d/nscd restart
Issue 2989696: Scheduled backups fail to start after NSX Manager restore operation.
Scheduled backup does not generate backups. Manual backups continue to work.
Workaround: See Knowledge base article 89059.
Federation: Replacing the Global Manager APH-AR certificate with a new certificate disconnects syncing with all Local Managers
when the GM is a single-node cluster.
This issue is seen only with NSX Federation and with the single node NSX Manager Cluster. The single-node NSX Manager will
disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager.
Workaround: Single-node NSX Manager cluster deployment is not a supported deployment option; use a three-node NSX Manager
cluster.
ISAKMP packets do not go out from the Edge after an intermediate router is powered off and on.
There could be an outage for a specific IPsec route-based session.
Workaround: Disable and re-enable the IPsec session to resolve the problem.
Configuration of a second IPsec session (using a different local endpoint) fails when it uses the same certificates as the first
session.
The failed IPsec session will not be established until the error is resolved.
Workaround: Use a unique self-signed certificate for each local endpoint.
Initial container spawn time is longer (10+ minutes) than before, which causes missed events in the UI (no Malware Prevention
events are shown in the UI until all containers are up).
When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During
this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring.
Workaround: Wait 15 minutes.
Multiple DFW sections are created on NSX-T when a DFW section with more than 1000 rules is migrated from NSX-V.
UI feedback is not shown.
During V2T ALB lift and shift, the Service Engine is unable to connect to the Avi Controller when NSX for vSphere has an explicit
rule blocking port 8443, which is used by the SE.
The Avi Service Engine is not able to connect to the Avi Controller.
Workaround: Add explicit DFW rules to allow ports 22, 443, 8443 and 123 for SE VMs or exclude SE VMs from DFW rules.
During V2T ALB lift and shift, the NSGroups created during migration have management/IPv6 addresses as pool members, although
on the NSX for vSphere side the pool members were VM names.
You will see a higher pool member count. The health monitor will mark those pool members down, but traffic will not be sent to
unreachable pool members.
Workaround: None.
Alarms are not generated after adding a node that has a missing forward or reverse lookup entry in DNS to a cluster with publish
FQDN set to true.
Forward/reverse alarms are not generated for the joining node even though the forward or reverse lookup entry is missing in the DNS
server or the DNS entry is missing for the joining node.
Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.
Removing a DVS of version 7.0.3 from vCenter fails after the cluster is prepared with security-only NSX.
You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from DVS or DVS
deletion.
Workaround: None.
Enabling logging for all security rules using the security policy API fails.
You cannot change the logging of all rules by changing "logging_enabled" of the security policy.
Workaround: Modify each rule individually to enable or disable logging, as sketched below.
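A sketch of what the per-rule update could look like via the Policy API (illustrative only; <manager_ip>, <policy-id>, and <rule-id> are placeholders, and the rule's "logged" field is the one toggled):
PATCH https://<manager_ip>/policy/api/v1/infra/domains/default/security-policies/<policy-id>/rules/<rule-id>
{
"logged": true
}
Repeat the call for each rule in the security policy.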
In Azure, the agent fails to start after installation on SLES 12 SP4 with accelerated networking enabled.
The VM agent does not start and the VM becomes unmanaged.
Workaround: Disable Accelerated networking.
During V2T ALB lift-and-shift migration, the error "Client 'admin' exceeded request rate of 100 per second (Error code: 102)" is
seen frequently at the L3 stage and above.
The NSX rate limit of 100 requests per second is reached when a large number of virtual services are migrated from NSX for vSphere to
NSX-T ALB, and all APIs are temporarily blocked.
Workaround: Update the client API rate limit to 200 or more requests per second.
Note: This is fixed in the Avi 21.1.4 release.
Resetting a local user password upon its expiry fails if vIDM configuration is enabled.
You are unable to change local user passwords while vIDM is enabled.
Workaround: vIDM configuration must be (temporarily) disabled, the local credentials reset during this time, and then integration re-
enabled.
Not all effective VMs and IPs are shown correctly when multiple AD users log in to different VMs for a policy group that has an
identity member with multiple AD users or multiple AD groups.
Effective members of the AD group are not displayed. There is no datapath impact.
Workaround: None.
The "get controllers" output can show out-of-date non-master controllers if the cluster membership changes and the master does
not.
This CLI output is confusing.
Workaround: Restart nsx-proxy on that TN.
Lcore priorities may remain high even when they are unused, potentially starving future low priority
VMs
Performance degradation for "Normal Latency" VMs.
Workaround: There are two options.
Reboot the system.
Remove the high priority LCores and then recreate them. They will then default back to normal priority LCores.
Continuous FIB update messages are observed at the standby Edge of an active/standby Tier-0.
Traffic drop for the prefixes that are getting continuously added/deleted.
Workaround: Add an inbound routemap that filters the BGP prefix which is in the same subnet as the static route nexthop.
The auto-generated EVPN child segment's realization status is Failed after creation of the EVPN tenant configuration and changes
to Success after about 5 minutes.
It takes about 5 minutes to realize the EVPN tenant configuration.
Workaround: None. Wait 5 minutes.
HTTPS, Controller, and Manager services do not come up for a newly added node in a cluster setup (publish FQDN is true on the
cluster).
The joining Manager will not work and the UI will not be available.
Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.
vIDM Config: Unable to disable the External LB configuration in the VIDM configuration page.
After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching
"External LB" off, after about a minute, the initial configuration will reappear and overwrite local changes.
Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause
that config to be saved to the database and synced to the other nodes.
Intermittent east-west traffic loss during host migration for topologies where multiple segments share a gateway.
In cases where vRA creates multiple segments and connects to a shared ESG, migration from NSX for vSphere to NSX-T will convert
such a topology to a shared Tier-1 connected to all segments on the NSX-T side. During the host migration window, intermittent traffic
loss might be observed for E-W traffic between workloads connected to the segments sharing the Tier-1.
Workaround: None.
Azure: An accelerated-networking-enabled CentOS VM is inaccessible after agent installation.
In Azure, when accelerated networking is enabled on RedHat or CentOS based operating systems and the NSX Agent is installed, the
ethernet interface does not obtain an IP address.
Workaround: Disable accelerated networking for RedHat and CentOS based operating systems.
The API to get route/forwarding tables from a logical router times out at scale; pagination support is required.
If the Edge has 65K+ routes in the RIB and 100K+ routes in the FIB, the request from the MP to the Edge takes more than 10 seconds
and results in a timeout. This is a read-only API and has an impact only if you need to download the 65K+ RIB routes or 100K+ FIB
routes using the API/UI.
Workaround: There are two options to fetch the RIB/FIB. These APIs support filtering options based on network prefixes or type of
route; use these options to download the routes of interest. Use the CLI if the entire RIB/FIB table is needed, because the CLI does not
time out.
A general error is seen with the Download OSPF routes option when a large number of OSPF routes are present.
These Policy APIs for the OSPF database and OSPF routes return an error if the Edge has 6K+ routes:
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes?format=csv
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database?format=csv
If the Edge has 6K+ routes for the database and routes, the Policy API times out. This is a read-only API and has an impact only if the
API/UI is used to download 6K+ OSPF routes and database entries.
Workaround: Use the CLI commands to retrieve the information from the edge.
Maximum rules validation (per Edge) for system-generated gateway firewall rules limits the number of VPN sessions.
NSX claims support for 512 VPN sessions per Edge in the large form factor; however, because Policy does auto-plumbing of security
policies, Policy only allows a maximum of 500 VPN sessions. Upon configuring the 501st VPN session on a Tier-0, the following error
message is shown: {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy
path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway
path=[/infra/tier-0s/inc_1_tier_0_1].'}
Workaround: Use Management Plane APIs to create additional VPN Sessions.
Unable to upgrade NSX-T from version 2.4.1 to 2.5.1 due to large CRL objects.
Unable to upgrade.
Workaround: Replace certificate with a certificate signed by a different CA.
The DFW default rule is not logged after a VDS upgrade from 6.5 to 7.0 followed by installing NSX Security on vSphere
DVPortgroups.
NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX
Security on vSphere DVPortgroups feature is supported.
Workaround: After VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.
L2VPN alarms are not in a resolved state even if the VPN session is up and running.
No functional impact except that unnecessary open alarms are seen.
Workaround: Resolve alarms manually.
AR channel port: certificate attributes are not periodically checked for expiry or revocation.
The connection could be using an expired or revoked SSL certificate.
Workaround: Restart the APH on the Manager node to trigger a reconnection.
GM created global dns/session/flood profiles cannot be applied to a local group from UI but can be
applied from API
Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from UI, but can be applied from
API. Hence, an API user can accidentally create profile binding maps and modify global entity on Local Manager.
Workaround: Use the UI to configure the system.
Remediating a host in a vLCM cluster with host-based service deployment fails at 95%.
The remediation progress for the host is stuck at 95% and then fails after the 70-minute timeout is reached.
CSM is not accessible after the Management Plane nodes are upgraded.
When MP is upgraded, the CSM appliance is not accessible from the UI until the CSM appliance is upgraded completely. NSX services
on CSM are down at this time. It's a temporary state where CSM is inaccessible during an upgrade. The impact is minimal.
Workaround: This is an expected behavior. You have to upgrade the CSM appliance to access CSM UI and ensure all services are
running.
Many pods are not shown in the output of "kubectl top pods -n nsxi-platform".
The output of "kubectl top pods -n nsxi-platform" does not list all pods for debugging. This does not affect deployment or normal
operation, and there is no functional impact; only debugging might be affected.
Workaround: There are two workarounds:
Workaround 1: Make sure the Kubernetes cluster comes up with version 0.4.x of the metrics-server pod before deploying NAPP
platform. This issue is not seen when metrics-server 0.4.x is deployed.
Workaround 2: Delete the metrics-server instance deployed by the NAPP charts and deploy upstream Kubernetes metrics-server
0.4.x.
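For Workaround 2, the commands might look like the following (a sketch only; the deployment name, namespace, and metrics-server version are assumptions to verify in your environment):
# Remove the metrics-server instance deployed by the NAPP charts.
kubectl delete deployment metrics-server -n nsxi-platform
# Deploy the upstream Kubernetes metrics-server 0.4.x release.
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.6/components.yaml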
vMotion is still allowed when the NSX CCP is down, and all DFW policies are removed from the VM when the vMotion is
performed.
For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed
NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX
Manager is re-established.
Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the
VM to another host that is connected to a healthy NSX Manager.
Login/logout events for the IDFW log scraper are not generated from the AD event source.
The identity firewall issues a recursive Active Directory query to obtain the user's group information. Active Directory queries can time
out with a NamingException 'LDAP response read timed out, timeout used: 60000 ms'. Therefore, firewall rules are not populated with
event log scraper IP addresses.
Workaround: To improve recursive query times, Active Directory admins may organize and index the AD objects.
File types are truncated at 12 characters, so the file type filter does not work.
On the Malware Prevention dashboard, when you click to see the details of an inspected file, you will see incorrect data because the
file type is truncated at 12 characters. For example, for a file with File Type as WindowsExecutableLLAppBundleTarArchiveFile, only
the first 12 characters of the file type are shown.
When a segment is created from Policy and a bridge is configured from the MP, the Detach Bridging option is not available for
that segment.
You will not be able to detach or update bridging from the UI if the Segment is created from Policy and the Bridge is configured from
the MP.
If a Segment is created from the Policy side, you are advised to configure bridging only from the Policy side. Similarly, if a Logical Switch
is created from the MP side, you should configure bridging only from the MP side.
Workaround: Use the following APIs to remove bridging:
1. Update the concerned LogicalPort and remove the attachment (add the header X-Allow-Overwrite: true to the request):
PUT https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
2. Delete the BridgeEndpoint:
DELETE https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>
3. Delete the LogicalPort:
DELETE https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
Issue 2679424: Host migration fails when a Migration Coordinator restart is triggered while a host migration is in progress.
After the restart of the MC service, all the selections relevant to host migration such as enabling or disabling clusters, migration mode,
cluster migration ordering, etc., that were made earlier are reset to default values.
Workaround: Ensure that all the selections relevant to host migration are performed again after the restart of the MC service.