VVolsCookBook 1.06RC
Abstract
This document is meant to get beta testers of VVols on NetApp started. All content is subject to the separate nondisclosure agreements (NDAs) between authorized testers and VMware and NetApp. It is assumed that the tester is familiar with VMware vSphere concepts and deployment. This information is subject to change.
TABLE OF CONTENTS
4 VVol datastores
4.1 Create a VVol datastore
5 Managing VMs
5.1 Creating VMs with VVols using VM Storage Policies
6 Advanced Features
6.1 Deduplication
7 Troubleshooting
7.1 Logs and tools
8 FAQ
References
LIST OF TABLES
Table 2) VVol types and implementation
Table 3) Testing VVols on Clustered Data ONTAP hardware requirements
Table 4) Testing VVols on Clustered Data ONTAP software requirements
Table 5) Entity DNS and IP list
LIST OF FIGURES
Figure 1) LUN protocol endpoints
Figure 3) vCenter, VSC, VASA VP, ESXi servers and clustered Data ONTAP
1 Solution Overview
These instructions assume a working knowledge of a previous version of VMware vSphere, including deploying OVF/OVA appliances, creating VMs, mounting ISO images as a CD/DVD for a VM, and similar tasks. Some detailed steps are omitted, but steps that differ from normal procedures are called out explicitly.
Capability | VM Storage Policy Values | SCP Values | Requirements and Notes
DiskTypes | Multi-select: SATA, FCAL, SAS, SSD | SATA, FCAL, SAS, SSD, Any | Aggregate must consist of disks of the specified type
Flash Accelerated | Yes, No | Yes, No, Any | FlashCache cards installed in the node hosting the containing aggregate, OR a FlashPool aggregate containing SSD and another disk type, with the aggregate setting is-hybrid=true
MaxThroughput_IOPS | Numeric | Number, then select IOPS or MBPS | QoS on the FlexVol with an IOPS limit
MaxThroughput_MBPS | Numeric | Number, then select IOPS or MBPS | QoS on the FlexVol with a throughput (MBps) limit
A set of capabilities for a volume or set of volumes is called a Storage Capability Profile (SCP). SCPs are created and managed using VSC. The VP surfaces SCPs, as well as individual capabilities, to vCenter.
VMs that use VVols are created using VM Storage Policies. The vSphere admin must create VM Storage Policies that map NetApp capabilities to one or more policies. Capabilities can be mapped into VM Storage Policies as an SCP, as individual capabilities, or both. If both are used, an individual capability selected in the VM Storage Policy overrides the corresponding capability selection in the SCP. For example, if a VM Storage Policy includes both an SCP that requires deduplication and the separate deduplication capability set to “no”, the resulting policy requires a FlexVol without deduplication. The Create VM Storage Policy wizard lists compatible and incompatible storage after capabilities and/or a profile are selected.
The other major aspect of the VM-granular management architecture is the change in storage objects.
Traditional datastores are either VMFS file systems created on LUNs or storage controller file systems
presented as NFS mounts. Within these datastores, a VM has a directory with a set of files. The virtual
disks are large files containing a disk image. There are also VM swap files, configuration files, logs and
others.
In the NetApp implementation of VVols, a VVol datastore consists of one or more FlexVol volumes within
a storage container (also called “Backing Storage”). A storage container is simply a set of FlexVol
volumes used for VVol datastores. All the FlexVols within a storage container must be accessed using
the same protocol (NFS, iSCSI, or FCP) and be owned by the same Storage Virtual Machine (SVM,
formerly called Vserver), but they can be hosted on different aggregates and nodes of the NetApp cluster.
FlexVols can be created outside of the VSC workflows or as part of the new VVol datastore wizard.
However, all LUN and other VVol-related objects are created and managed by the VP.
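For example, a FlexVol intended for use in a VVol datastore could be created by hand as follows. This is a minimal sketch; the volume and aggregate names and the size are placeholder assumptions:
cluster::> volume create -vserver <svm_name> -volume vvol_ds1 -aggregate aggr1 -size 100g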
A VVol is either a LUN, when used with block protocols, or a file or directory with NFS. A VVol LUN is not mapped to hosts (masked, in common SAN terminology) the way traditional LUNs are.
Table 2) VVol types and implementation.
VVol Type | Block (SAN) Implementation | NFS Implementation | Notes
Memory | LUN, size of virtual memory | File, size of virtual memory | Only created if a memory snapshot is selected during a running-VM snapshot
Figure 1) LUN protocol endpoints
For NFS, a PE is a mount point to the root of the SVM. A PE is created by the VP for each data LIF of the SVM, using the LIF’s IP address. PEs are created when the first VVol datastore is created on the SVM using the specific protocol. The VP automatically creates export policy rules.
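To verify the export rules the VP created, you can list the SVM’s export policy rules from the cluster CLI. This is a hedged example; the SVM name is a placeholder:
cluster::> vserver export-policy rule show -vserver <svm_name>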
I/O to a VVol is through a specific PE. VVol LUNs are bound to the PE through a binding call managed
by the VP. The VP determines which PE is on the same node as the FlexVol containing the VVol and
binds the VVol to that PE. VVols are bound to a PE on access from an ESXi server. The most common
form of access is powering on the VM. The following command shows the binding relationship between a
VVol and the PE LUN through which ESXi accesses the VVol.
eadrax::*> lun bind show -instance
Vserver: xaxis
PE MSID: 2147484885
PE Vdisk ID: 800004d5000000000000000000000063036ed591
VVol MSID: 2147484951
VVol Vdisk ID: 800005170000000000000000000000601849f224
Protocol Endpoint: /vol/ds3/vvolPE-1410312812730
PE UUID: d75eb255-2d20-4026-81e8-39e4ace3cbdb
PE Node: eadrax-03
VVol: /vol/vvol31/naa.600a098044314f6c332443726e6e4534.vmdk
VVol Node: eadrax-03
VVol UUID: 22a5d22a-a2bd-4239-a447-cb506936ccd0
Secondary LUN: d2378d000000
Optimal binding: true
Reference Count: 2
Figure 3) vCenter, VSC, VASA VP, ESXi servers and clustered Data ONTAP.
Note: While all components of the VVols beta can be virtualized, NetApp recommends that any of the
components in Figure 3 that are virtualized run on a separate, stable vSphere infrastructure, not
on the ESXi 6.0 beta servers.
The VP is deployed as an OVA virtual appliance and is managed by Virtual Storage Console, which plugs into the vSphere Web Client. The administrator manages VASA and VVols using the Web Client.
VMs running on VVols require the VP to be running in order to power on, because the swap VVol is created when the VM is powered on. This also means that the VP itself should not run on VVols, since it would be its own dependency.
1.4 Limitations
Per VMware, VVols do not support NFS v4.
The X2 release candidate build of VP and VSC does not support ESXi servers that are members of
vSphere distributed switches (vDS), even if the vDS is not used for storage traffic. Such hosts will display
as incompatible or in maintenance mode.
2 Initial Setup
This section describes setup tasks for the VVols on NetApp solution that are common to all storage protocols.
Hardware Requirements
Table 3 lists the hardware components required to implement the use case.
Table 3) Testing VVols on Clustered Data ONTAP hardware requirements.
Hardware | Quantity
Servers that support vSphere 6.0 beta | 1 minimum, 2 preferred
NetApp cluster that supports clustered Data ONTAP 8.2.1 or higher, OR NetApp Vsim running clustered Data ONTAP 8.2.1 or higher | 1 cluster, with 1 node minimum
Software Requirements
Table 4 lists the software components required to test VVols on NetApp Clustered Data ONTAP. Note
that these instructions may apply to later versions, but some steps will change. For example, some
manual steps may become part of a wizard or workflow.
Table 4) Testing VVols on Clustered Data ONTAP software requirements.
Configure and test DNS entries for vCSA, VASA VP, VSC Windows Server
Specific steps for configuring DNS entries depend on your infrastructure; however, the following entities must have properly working DNS entries, and the hostnames used when deploying them must match DNS.
vCenter Server 6.0 (vCSA or Windows based)
NetApp VASA Vendor Provider appliance
Windows Server running VSC
The following should have DNS entries, but if not, VVols may still work:
ESXi servers
cDOT cluster management and node management LIFs
SVM (vserver) management LIFs
SVM data LIFs do not need DNS entries, especially if they are on a separate, private storage network.
Table 5 provides a suggested list of entities and their DNS and IP addresses. Not all are needed, depending on the number of nodes in your cDOT cluster and which protocols you are using. NFS and iSCSI require separate LIFs with their own IP addresses, because cDOT allows NFS LIFs to migrate and fail over, but not iSCSI LIFs.
Table 5) Entity DNS and IP list.
Entity | DNS Name | IP Address
vCenter Server | |
NetApp VASA VP | |
VSC Server | |
ESXi 1 | |
ESXi 2 | |
cDOT node 1 | |
cDOT node 2 | |
cDOT node 3 | |
cDOT node 4 | |
SVM 1 mgmt | |
SVM 1 data LIFs (node 1 through node 4) | |
3. Enter the hostname to be used for vCSA 6.0. The following console block shows an unsuccessful
and a successful lookup:
> p1vcsa60b2
Server: [172.16.24.31]
Address: 172.16.24.31
Name: p1vcsa60beta1.vgibu.eng.netapp.com
Address: 172.16.24.35
4. To check reverse lookup, enter the IP address to be used for vCSA 6.0.
> 172.16.24.35
Server: [172.16.24.31]
Address: 172.16.24.31
Name: p1vcsa60beta1.vgibu.eng.netapp.com
Address: 172.16.24.35
5. Repeat these steps for the VASA VP, VSC server, and other entities.
Time services
It is recommended to use a common time server for all servers (including the ESXi 6 test servers and the
servers hosting vCSA, VSC and the VP), storage and VMs in the environment. Ensure each device or
VM is configured to use the time server.
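If the cluster does not already have a time source, one can be added from the CLI. This is a minimal sketch assuming clustered Data ONTAP 8.2 syntax; the node and server names are placeholders, and the command should be repeated for each node:
cluster::> system services ntp server create -node <node_name> -server <ntp_server>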
2. Download from <tbd>
3. Extract files into a directory under a datastore. You don't need to rename any files or VMX entries,
even if you want to rename the VM.
cd /path/to/datastore/vm_directory
tar xvzf vsim4vvolsxxx.tgz
4. Browse the datastore and register the VM. You should rename the VM.
5. Edit VM settings and verify the virtual NICs are on the correct virtual networks (portgroups). The first
two (which will become e0a and e0b) should be on “Cluster Network”. The third (e0c) should be on a
routable network such as “VM Network”. The fourth (e0d) should be on your storage network, which
may be physically separate or a VLAN, or in smaller labs could be the same network as one or all of
the first three.
6. Power on the Vsim VM.
7. Open the VM console.
8. When you see the “Press Ctrl-c for Boot Menu” prompt, press ctrl-c.
Note: This used to say “Press ctrl-c for special boots”. Since the prompt has changed, it is no
longer required to wear special boots when accessing this menu. OK, seriously, if you miss
the ctrl-c menu, the boot process will likely panic and come back to it after a minute anyway.
9. Enter 4 to select Option 4 which will assign 3 disks to the Vsim and zero them out. Answer y to the
confirmation questions. The Vsim will reboot and zero the disks which takes a few minutes.
Note: If the Vsim sits idle after displaying a number of status messages, it is likely that the first confirmation question was buried in the messages and scrolled off the screen. Press y and Enter, and it should ask the second confirmation question.
10. The 8.2.x Vsim boots into the cluster setup wizard.
11. If this is the first or only planned Vsim, enter create and press enter.
12. Enter yes for single node cluster.
13. Give the cluster a name.
14. Enter the cluster base license key.
15. Do not enter additional keys at this time. (Well, you can, but it’s easier to copy and paste once you
have SSH working.)
16. Enter a password for cluster admin (twice).
17. Cluster management should be on e0c.
18. Enter the IP address for cDOT cluster management from Table 5, and the netmask and default
gateway.
19. Enter DNS domain(s) and DNS server(s) separated by commas.
20. Location is optional.
21. Node management interface port should be the same as cluster management (e0c).
22. Enter the IP address for node management from Table 5, and the netmask and default gateway.
23. Backup location is optional.
24. You can now log in via SSH to complete the rest. SSH to admin@<cluster_mgmt_ip>
25. Add additional licenses. Ensure you add at least one protocol license (NFS or iSCSI) and FlexClone.
cluster::> license add -license-code <code>
NFS <tbd>
iSCSI <tbd>
SnapMirror <tbd>
FlexClone <tbd>
Note: RAID type raid4 should only be used with Vsims. Best practice for physical clusters is raid_dp. (An aggregate creation sketch appears at the end of this procedure.)
29. If you get the warning “Warning: Creation of aggregate "aggr1" has been initiated. 11 disks need to be zeroed before they can be added to the aggregate. The process has been initiated. Once zeroing completes on these disks, all disks will be added at once. Note that if the system reboots before the disk zeroing is complete, the aggregate will not exist.”, wait for the disks to zero and the aggregate to come online. Check status with aggr show.
p1vsim2::> aggr show
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0 7.18GB 343.9MB 95% online 1 p1vsim2-01 raid_dp,
normal
aggr1 64.59GB 64.59GB 0% online 0 p1vsim2-01 raid_dp,
normal
2 entries were displayed.
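For reference, the aggregate creation from the omitted steps might look like the following on a Vsim. This is a sketch only, not the exact command from those steps; the aggregate name, node name, and disk count are assumptions:
cluster::> storage aggregate create -aggregate aggr1 -nodes p1vsim2-01 -diskcount 12 -raidtype raid4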
CLI
1. The fastest way to create a SVM (Vserver) is using the vserver setup wizard.
2. From the CLI via SSH, enter vserver setup.
3. Give the SVM a meaningful name.
4. Enter the protocols you wish to use, separated by commas.
Note: Ensure you have previously entered the necessary licenses for each protocol you wish to use before creating an SVM with those protocols.
5. The remaining wizard Step 1 SVM questions are straightforward, and for the most part you can accept the defaults. For VVols testing, you usually won’t need any of the Vserver client services (ldap, nis, dns).
6. In wizard step 2 you can create a set of volumes to be used for VVol datastores. These steps are also straightforward. You need at least one volume. You can create multiple volumes and change settings on the volumes, such as deduplication and compression, in order to have different capabilities that can be exposed by the VP.
7. In wizard step 3 you can create logical interfaces for the protocols you have configured. You should create a LIF per node per fabric or network. Select the correct protocol for each LIF (fc, iscsi, or nfs). For single-node Vsims, you should usually use e0d for data LIFs to connect to ESXi. You can leave the gateway blank if the ESXi server storage network VMkernel ports and cDOT data LIFs are on the same subnet. (A LIF creation sketch appears after this procedure.)
8. In wizard step 4 you configure protocols.
For NFS, no questions are asked.
For FCP and iSCSI, the wizard prompts you to create an initiator group and LUN, which are not necessary or used for VVols. At least one initiator is required for the igroup, but you can enter the example “iqn.1995-08.com.example:string” for iSCSI or a fake WWPN such as “20:00:01:02:11:11:11:11” for FCP. Create the LUN very small (20m) since it won’t be used. You can either delete these objects or ignore them. If you do not complete this part of the wizard, the FCP or iSCSI service will not be created, but you can create it by hand using the iscsi create or fcp create command (see the sketch after this procedure).
9. After the SVM is created, disable NDMP node scope and enable NDMP for the SVM.
cluster::> system services ndmp node-scope-mode off
NDMP node-scope-mode is disabled.
Note: Disabling NDMP node scope is a one-time task per cluster. When enabling NDMP for the SVM by editing its allowed-protocols list with vserver modify, ensure the list still includes the SVM’s other allowed protocols.
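If you skipped parts of the wizard, the LIFs and protocol services can also be created by hand. The following lines are a minimal sketch assuming clustered Data ONTAP 8.2 syntax; the SVM name, LIF name, node, port, and addresses are placeholder assumptions:
cluster::> network interface create -vserver <svm_name> -lif nfs_lif1 -role data -data-protocol nfs -home-node <node_name> -home-port e0d -address 172.16.25.20 -netmask 255.255.255.0
cluster::> iscsi create -vserver <svm_name>
cluster::> fcp create -vserver <svm_name>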
NFS
To prepare for VVols over NFS, ensure you have a data LIF per node for the SVM supporting VVols. These data LIFs must be homed on a port on the ESXi-to-storage network. If you are using the Vsim in a single-node configuration, this means one data LIF configured for NFS. iSCSI and NFS LIFs must be separate, but they can coexist on the same network and physical port, ifgrp, or VLAN.
If you did not create an NFS service on your SVM when you created the SVM, create an NFS service as
follows:
1. Create the NFS service for your SVM
cluster::> nfs create -vserver <svm_name> -access true -v3 enabled -tcp enabled -vstorage enabled
You do not need to manage export policies as the VP will do this automatically.
iSCSI
To prepare for VVols over iSCSI, ensure you have a data LIF per node for the SVM supporting VVols. These data LIFs must be homed on a port on the ESXi-to-storage network. If you are using the Vsim in a single-node configuration, this means one data LIF configured for iSCSI. iSCSI and NFS LIFs must be separate, but they can coexist on the same network and physical port, ifgrp, or VLAN.
Create and configure ESXi iSCSI vmkernel ports on the same network, VLAN and subnet as the SVM
target LIFs.
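From the ESXi shell, the VMkernel port setup might look like the following sketch; the same result can be achieved in the Web Client. The interface name vmk1, portgroup name iSCSI, adapter vmhba33, and IP address are assumptions:
~ # esxcli iscsi software set --enabled=true
~ # esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI
~ # esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 172.16.25.11 --netmask 255.255.255.0 --type static
~ # esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1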
Create empty volumes, but do not create any LUNs or igroups.
FC
Note that FC is not supported with Vsim as it is not possible to have hardware HBAs with a virtual storage
appliance. To prepare for VVols over FC, ensure you have a data LIF per node per fabric for the SVM
supporting VVols. For example, if you have 2 nodes and typical dual redundant SAN fabrics, you should
have 4 FC target LIFs in the SVM each home on a different physical FC target port.
Remember all zoning rules and practices apply, and that you should use soft zoning specifying the target
WWPN of the SVM LIFs, not the physical ports of the nodes.
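The target WWPNs to zone are those of the SVM’s FC LIFs, which can be listed as follows; the SVM name is a placeholder:
cluster::> fcp interface show -vserver <svm_name>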
Create empty volumes, but do not create any LUNs or igroups.
Install and Configure Virtual Storage Console
For more information, please see the VSC 5.0 documentation at
http://mysupport.netapp.com/documentation/docweb/index.html?productID=61789&language=en-US.
1. On the Windows server for VSC, download and run 5.0.1X7-win64.exe.
Note: The file version may change as new versions of VSC and the VP are released to match new vSphere betas or RCs.
2. Click Next, check “I understand …”, then Next again.
3. Leave Backup and Recovery unchecked and click Next, Next, Install.
4. If you get a dialog stating that “The NetApp vSphere Plugin Framework has not yet started.”,
complete the following on the VSC Windows server:
a. Open a command prompt
b. CD to the bin directory in the VSC installation directory, usually C:\Program Files\NetApp\Virtual
Storage Console\bin.
c. Run the following command:
C:\Program Files\NetApp\Virtual Storage Console\bin> vsc.bat ssl setup -generate-passwords
d. Start the Virtual Storage Console for VMware vSphere Server service (also known as NetApp
vSphere Plugin Framework or NVPF) in Windows Services.
5. If it didn’t happen automatically, open a browser to https://localhost:8143/Register.html.
6. In the plugin service information box, select either the FQDN or the IP address of the VSC server.
7. Fill in the vCenter Server information. Use “[email protected]” for User name.
8. Click Register.
9. In your browser where you were logged in to vSphere 6.0 Web Client, log out and back in again.
10. Virtual Storage Console icons and menus should be available in vCenter. VASA VP menus will not
yet be present.
13. To install VMware Tools in the VP VM, in the Inventory Trees, click the VM, then in the yellow
warning box, click Install VMware Tools.
14. In the VP console, press enter to continue VMware Tools installation.
15. When the installer prompts you, edit the VM settings and ensure the CD-ROM is disconnected and
set to Client Device.
16. In the VP console, press enter to reboot.
17. Set passwords for maint and vpserver. Best practice is to use different passwords for these
accounts.
18. Wait for the VP to start all processes and Application Status to display “vpserver is running and
waiting for vSphere registration”.
19. In the vSphere 6.0 beta Web Client, click the Home icon, then Virtual Storage Console > Configuration > Register/Unregister VASA Vendor Provider.
20. Enter the IP Address or hostname of the VP and enter the vpserver password, then click Register.
23. If you get internal error 1063, log out, wait 5-7 minutes, and log back in.
2. Click the yellow scroll with green checkmark icon.
4. Repeat step 3 for each ESXi host or cluster.
5. Click Close.
“Any” means the capability is optional in this profile.
5. Review the summary of capability selections on the last screen then click Finish.
4 VVol datastores
Note: Do not use the built-in vCenter New Datastore wizard to provision NetApp VVol datastores. The
NetApp workflow performs all necessary storage-side setup including export policies, initiator
groups, LUN mapping, etc. While it is possible to perform these steps manually then use the
vCenter New Datastore wizard, it is much more error prone.
d. Fill in the details for a new FlexVol.
Note: Only aggregates that the SVM is permitted to use (listed in the SVM aggr-list parameter) will
show up in the list.
e. Click OK
f. Repeat steps c through e to create additional volumes as part of this VVol datastore.
8. Select the default SCP then click Next.
9. Review the settings for the VVol datastore then click Finish.
Note: No new FlexVols are created until after you click Finish.
5 Managing VMs
6 Advanced Features
6.1 Deduplication
In order to use an SCP or VM Storage Policy with deduplication, the feature must be enabled on one or
more FlexVols. This can be achieved using System Manager or with the following command:
cluster::> volume efficiency on -vserver <svm> -volume <volume>
For more information on deduplication, see the Logical Storage Management Guide (https://library.netapp.com/ecm/ecm_download_file/ECMP1368017) or the Data Compression and Deduplication Deployment and Implementation Guide for Clustered Data ONTAP (http://media.netapp.com/documents/tr-3966.pdf).
6.2 Compression
In order to use an SCP or VM Storage Policy with compression, the feature must be enabled on one or
more FlexVols. This can be achieved using System Manager or with the following command:
cluster::> volume efficiency modify -vserver <svm> -volume <volume> -compression true
For more information on compression, see the Logical Storage Management Guide (https://library.netapp.com/ecm/ecm_download_file/ECMP1368017) or the Data Compression and Deduplication Deployment and Implementation Guide for Clustered Data ONTAP (http://media.netapp.com/documents/tr-3966.pdf).
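To confirm the efficiency state of the FlexVols that will back these capabilities, you can list them per SVM; the SVM name is a placeholder:
cluster::> volume efficiency show -vserver <svm_name>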
7 Troubleshooting
This error can be caused by:
Hostname or FQDN and/or IP address provided in the deployment wizard not matching between the two configuration sections, or not matching the entry on the DNS servers.
SSO password not configured in the deployment wizard.
No visible vCenter Servers and/or the message “To get started, either install a vCenter
Server system, or obtain access permission to an existing vCenter Server system.”
This is most likely caused by logging in as root or another user that, as of beta 2, no longer has the Administrators role or group membership by default. Either log in as [email protected] for administrative access, or add the other user to the Administrators group on the vCSA server.
After a session goes idle, the Web Client displays “Connection Error: vSphere Web Client session is no longer authenticated. Click OK to attempt a new login.” in a loop.
Normally after an idle session, you will simply be prompted to log in again. In some cases, you may need
to close the browser (all tabs and windows) and reopen. In extreme cases, you may need to restart the
vsphere-client service on the vCSA CLI.
p1vcsa60beta1:~ # service vsphere-client status
VMware vSphere Web Client is running: PID:32293, Wrapper:STARTED, Java:STARTED
p1vcsa60beta1:~ # service vsphere-client restart
Stopping VMware vSphere Web Client...
Waiting for VMware vSphere Web Client to exit...
Stopped VMware vSphere Web Client.
Starting VMware vSphere Web Client...
Waiting for VMware vSphere Web Client......
running: PID:10386
p1vcsa60beta1:~ # service vsphere-client status
VMware vSphere Web Client is running: PID:10386, Wrapper:STARTED, Java:STARTED
1. Log in to the Windows VM running VSC.
2. Open a command prompt
3. CD to the bin directory in the VSC installation directory, usually C:\Program Files\NetApp\Virtual
Storage Console\bin.
4. Run the following command:
C:\Program Files\NetApp\Virtual Storage Console\bin> vsc.bat ssl setup -generate-passwords
Some datastores have a red exclamation icon and an alert or issue stating “vSphere HA
failed to create a configuration vVol for this datastore and so will not be able to protect
virtual machines on the datastore until the problem is resolved. Error:
(vim.fault.InaccessibleDatastore)”
If there was an issue with VVol datastores during initial setup, they may not have been accessible when HA tried to create the VVol in which it keeps HA information. This error can be cleared by disabling HA and then re-enabling it.
NetApp VP does not show in list of VPs when creating VM Storage Policy
This should only happen when first setting up the environment, the first time the NetApp VP is registered.
Restart the vmware-sps service in vCenter.
p1vcsa60beta1:~ # service vmware-sps restart
Stopping VMware vSphere Profile-Driven Storage Service...
Stopped VMware vSphere Profile-Driven Storage Service.
Starting VMware vSphere Profile-Driven Storage Service...
wrapper | An encoding declaration is missing from the top of configuration file,
/usr/lib/vmware-vpx/sps/wrapper/conf/wrapper.conf, trying the system encoding.
wrapper | An encoding declaration is missing from the top of configuration file,
/usr/lib/vmware-vpx/sps/wrapper/conf/wrapper_java_additional.conf, trying the system encoding.
wrapper | Spawning intermediate process...
Waiting for VMware vSphere Profile-Driven Storage Service.......
running: PID:10624
How to unregister VASA VP when VSC thinks it is registered but menus don’t show up
1. Open the vSphere Managed Object Browser by pointing your web browser to
https://your_vc_server/mob/?moid=ExtensionManager.
2. Click (more…) to see all registered extensions.
3. Look for
extensionList["com.netapp.vasa.webclient"] Extension
extensionList["com.netapp.vasa.vvol.webclient"] Extension
4. Click UnregisterExtension
5. Enter com.netapp.vasa.vvol.webclient
6. Click Invoke Method.
7. Repeat steps 4 through 6 for the com.netapp.vasa.webclient extension.
Note: Later versions of the VP only have one extension.
8. You can now reregister the VP using VSC > Configuration > Register/Unregister VASA Vendor Provider.
9. On the vCenter server, restart vmware-sps and vsphere-client services.
10. Log out of vSphere Web Client and log back in again.
First, test without using SSL by clicking the Options>> button on the Add Storage System dialog then
unchecking Use SSL.
Usually cDOT SSL is configured as necessary automatically during setup. However, to verify if any
settings have changed, refer to the “Managing SSL” section of the Clustered Data ONTAP® 8.2 System
Administration Guide for Cluster Administrators.
This error will also be seen when there is an IP address conflict with the cluster management LIF.
New SVM or volume not showing up in VSC menus
New storage objects need to be discovered. Go to VSC > Storage Systems and click the blue storage icon to Update All.
8 FAQ
Do I have to create an SVM (Vserver)?
Yes. Creating an SVM is not automated as part of the VVols workflows.
References
This report references the following documents and resources:
Virtual Volumes Technical FAQ: https://communities.vmware.com/docs/DOC-27367 (requires vSphere 6 beta2 access)
NetApp® Readme for Beta Release VASA Provider for clustered Data ONTAP: http://mysupport.netapp.com/NOW/download/software/beta/beta_vasa_cdot/6.0X2/readme_VP-RC.pdf
Using VMware VVOLs with Clustered Data ONTAP Storage: http://mysupport.netapp.com/NOW/download/software/beta/beta_vasa_cdot/6.0X2/VVOLS_QuickStart_RC.pdf
http://mysupport.netapp.com/NOW/download/software/beta/beta_vasa_cdot/6.0X2/
Simulate ONTAP 8.2 Installation and Setup Guide
VSC 5.0 documentation: http://mysupport.netapp.com/documentation/docweb/index.html?productID=61789&language=en-US
Version History
Version | Date | Document Version History
Version 1.0 | Sep 2014 | Initial version
Version 1.04 | Nov 2014 | Updated for vSphere 6.0 RC, NetApp VASA VP 6.0X2 and VSC 5.0.1X7.
Version 1.06 | Mar 2015 | Added notes on updated Vsim setup, including 8.2 and 8.3.
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.
NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.