BR Course Info
Uploaded by karthick

BR Course

- Chapter 4
Node Vserver:
Admin Vserver: interacts with things that impact the entire cluster
Cluster Vserver:
page 25.
Cluster-Mode can support 64-bit aggregates
pg 27.
When you create a flexvol, part of the process is to junction it to the vserver.
pg 35.
When you create the flexvol, there is a command option to create the junction at that time.
You can also create subdirectories within the flexvol, below the junction entity.
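The junction idea above can be sketched as a mount table that stitches flexvols into one vserver namespace. This is a conceptual sketch only (not ONTAP code); the volume and path names are invented:

```python
# Toy model of junction paths: each junction maps a namespace path to a
# flexvol, and the longest matching junction wins, like a mount table.
junctions = {
    "/": "vs1_root",            # vserver root volume
    "/projects": "proj_vol",    # flexvol junctioned at /projects
    "/projects/archive": "arch_vol",
}

def resolve(path):
    """Return (volume, path-within-volume) for a client-visible path."""
    best = max((j for j in junctions
                if path == j or path.startswith(j.rstrip("/") + "/")),
               key=len)
    rel = path[len(best):].lstrip("/") or "."
    return junctions[best], rel
```

For example, resolving "/projects/archive/2010" lands in the deepest junctioned volume, while paths under "/projects" that do not cross the lower junction stay in the parent flexvol.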

- Chapter 5
N-Blade takes requests from clients...
CSM - Traffic cop (caches the VLDB), so it has a listing of all volumes
D-Blade - Data handler
page 12
Kernel modules are tightly associated with bootup; if they fail, the system will likely crash.
vol0 on all hosts has the following RDB units: 1) MGT (Management), 2) VLDB (Volume Location DB), 3) VIF Mgr
- No need to back these up, as they are replicated to each node in the cluster; if one crashes, it can easily be restored.
page 24
Operations are transactional ==> all or nothing; every part must succeed, otherwise failure (revert/rollback)
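The all-or-nothing behavior can be sketched as follows. This is a conceptual illustration of transactional apply/rollback, not the actual RDB implementation:

```python
import copy

def apply_all(state, changes):
    """Apply every (key, value) change, or none of them."""
    snapshot = copy.deepcopy(state)     # remember starting state for rollback
    try:
        for key, value in changes:
            if value is None:
                raise ValueError("bad change")  # any failure aborts the batch
            state[key] = value
        return True                     # every part succeeded: commit
    except ValueError:
        state.clear()
        state.update(snapshot)          # revert/rollback to the snapshot
        return False
```

A batch containing one bad change leaves the state exactly as it was before the batch started.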
page 26
There will always be a master; one node will always be the tie-breaker node (epsilon)
page 30
Changes (writes) are frozen until Master(s) are elected.

Chapter 6
- Routing groups will listen to RIP updates; no need to create static routes
- If one does have them, beginning in 8.0 it understands Cisco CDP
DNS Load Balancing
kb 323380 (Windows 2003)
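DNS load balancing in this context means the name server hands out the data LIF addresses in rotation so client mounts spread across interfaces. A conceptual round-robin sketch (the addresses are invented):

```python
import itertools

def round_robin_resolver(addresses):
    """Return a lookup function that rotates through LIF addresses."""
    cycle = itertools.cycle(addresses)
    # hostname is ignored in this toy model; a real resolver would key on it
    return lambda hostname: next(cycle)

lookup = round_robin_resolver(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

Successive lookups of the same vserver name return successive addresses, wrapping around after the last one.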
Chapter 7
page 11
All AD forests are Kerberos realms - Vservers will function properly with CIFS
page 13
Export Policy - Must customize the export policy; the default is that no volumes are exported.

pg 19
Old 7G: cifs setup command
8.0: Kerberos enabled (AD) or disabled (NT4 ==> NT LAN Manager)
pg 21
Best to pre-create the computer account in AD prior to CIFS creation

pg 22
The old /etc/usermap.cfg file needed to be created to map between Windows (ACL) and UNIX system security styles.
Page 23
#1 Have to manually name your Windows CIFS share (note vserver cifs create #1) "MYCIFS"
#3 Ensure that the CIFS share that is mapped goes to the read-write admin repository (data LIF), not the read-only mirror, via /.admin
#4 Mapping a drive letter to the user's home directory substitutes the %u
page 26 & 27
page 26 & 27
Need to have (2) rules for bidirectional Windows-to-UNIX and UNIX-to-Windows mapping, 1 rule each
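Why two rules: each mapping rule only covers one direction, so round-tripping a name needs a win-unix rule and a unix-win rule. A conceptual sketch with invented names (real ONTAP rules use patterns, not exact matches):

```python
# One rule per direction; a name with no matching rule passes through unchanged.
rules = [
    ("win-unix", "MYDOMAIN\\jsmith", "jsmith"),
    ("unix-win", "jsmith", "MYDOMAIN\\jsmith"),
]

def map_name(direction, name):
    for d, pattern, replacement in rules:
        if d == direction and pattern == name:
            return replacement
    return name
```

Dropping either rule breaks mapping in that direction while the other direction still works, which is the asymmetry the course warns about.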
Chapter 8
page 2
Data Restore
BR can participate in NDMP (as destination or source), but must initiate NDMP from a 7G server
NDMPcopy is not cluster aware
volume snap restore ==> same as volume snap promote
page 4
Asynchronous only
Either data protection or load sharing ==> no cascade
No single-file restore; must promote (restore) the entire volume
To get a single file, go to the .snapshot directory on a client and move (copy) the file over
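The client-side single-file workaround can be sketched as a plain copy out of the read-only .snapshot directory. Conceptual only; the snapshot name and paths are invented:

```python
import os
import shutil

def restore_file(volume_root, snapshot_name, rel_path):
    """Copy one file out of a snapshot back into the live volume."""
    src = os.path.join(volume_root, ".snapshot", snapshot_name, rel_path)
    dst = os.path.join(volume_root, rel_path)
    shutil.copy2(src, dst)   # copy, never move: the snapshot is read-only
    return dst
```

This is exactly "go to the .snapshot directory and copy the file over": no promote, no whole-volume restore.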
page 11
Create the volume (no restriction needed, unlike 7G) regarding the baseline
If the source goes down, you can promote a mirror
snapmirror resync equivalent
page 13
Load-Sharing (LS) mirroring balances the reads
The .admin path is special
page 20
From the client perspective, when mounting via the .admin path you're using a read-write share
page 25
count ==> Retention = number of copies kept.
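A retention count of N simply means only the newest N copies survive, oldest deleted first. A trivial conceptual sketch:

```python
def prune(snapshots, count):
    """snapshots is ordered oldest -> newest; return the copies kept."""
    return snapshots[-count:] if count > 0 else []
```

With count=2, a schedule holding four snapshots drops the two oldest.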
page 26
DAR
In a tape backup, a restore would normally need to read the index in order to locate a file; with Direct Access Recovery capabilities you don't need this.
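The DAR idea can be sketched as seeking straight to a recorded offset instead of scanning the tape index. A conceptual sketch with an invented tape image:

```python
import io

def dar_read(tape, offset, length):
    """Direct Access Recovery: jump straight to the file, no index scan."""
    tape.seek(offset)
    return tape.read(length)

# Toy tape image: a 6-byte header followed by two 6-byte "files".
tape_image = io.BytesIO(b"header" + b"FILE-A" + b"FILE-B")
```

With the offset recorded at backup time, a single-file restore touches only that file's region of the tape.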
page 40

Chapter 9 Basic Troubleshooting


Best to create a panic so that a core is dumped (NMI interrupt button on the system, via paperclip push, is best)
page 10
! = "not"
Note: giveback issues
- If a CIFS session (stateful) is running on the takeover node, you may have issues with giveback.
page 21
Assume that the D-blade (node-based entity) has better info than the cluster-wide VLDB.
page 29
rdb_dump: can watch the election of new masters if a rebooting node was a master.
If, looking at cluster show & rdb_dump, you determine one of the (3) RDB units (MGT, VLDB or VifMgr) is bad among the cluster, a possible fix if corrupted is to delete the files and reproduce them using the copies on another node.
Chapter 10 Performance
page 11
For NAS operation, client OS caching is best, but if the application itself caches, use that instead
page 20
DB example - transaction log writes are sequential, but writes to the DB are random
page 33
Use perfstat8 instead of perfstat

================================================================================
login: admin
Password:
(system setup)
Welcome to the setup configuration wizard.
If at any time you want to have a question clarified, type "help".

You can abort the setup configuration at any time by typing "exit".
Any changes made before typing "exit" will be saved.
To skip a question (or accept the listed default), don't enter a value
Please enter the node hostname []: node7
Where is the node located? []: NetappU
Do you want to configure a management interface? [yes]:
Which port do you want to use? [e1a]:
Please enter the IP Address for this interface: 10.254.144.17
Please enter the Netmask for this interface: 255.255.252.0
INFO: Your interface was created successfully; the value of -use-failover-group was defaulted to 'enabled' for this non-data lif.
Your interface was created successfully; the routing group n10.254.144.0/22 was created
Please enter the IP Address of the default gateway: 10.254.144.1
Do you want to run DNS resolver? [no]: yes
Please enter DNS domain name(s): nau01.netappu.com
Please enter the IP address for the first nameserver: 216.240.23.25
Please enter the IP address for the second nameserver: 10.254.132.10
This system will send event messages and weekly reports to NetApp Technical
Support. To disable this feature, enter "autosupport modify -support disable"
within 24 hours. Enabling AutoSupport can significantly speed problem
determination and resolution should a problem occur on your system. For further
information on AutoSupport, please see: http://now.netapp.com/autosupport/
Press the return key to continue:
Do you want to enable remote management? [yes]: no
The node has now been configured.
login: root
Password:
WARNING: The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly. Use this environment
only when directed to do so by support personnel.
This account is currently not available.
login: admin
Password:
node7::> stor aggr show
  (storage aggregate show)
WARNING: Only local entries can be displayed at this time.
Aggregate     Size Available Used% State   #Vols Nodes            RAID Status
--------- -------- --------- ----- ------- ----- ---------------- -----------
aggr0      56.76GB    2.59GB   95% online      1 node7            raid_dp
node7::> net port show
  (network port show)
                                     Auto-Negot  Duplex     Speed (Mbps)
Node   Port   Role        Link  MTU  Admin/Oper  Admin/Oper Admin/Oper
------ ------ ----------- ---- ----- ----------- ---------- ------------
node7
       e0a    cluster     up    1500 true/true   full/full  auto/1000
       e0b    cluster     up    1500 true/true   full/full  auto/1000
       e0c    data        up    1500 true/true   full/full  auto/1000
       e0d    data        up    1500 true/true   full/full  auto/1000
       e1a    node-mgmt   up    1500 true/true   full/full  auto/1000
       e1b    data        down  1500 true/true   full/half  auto/10
6 entries were displayed.
node7::> cluster create -license NWUZZJPJYBFDAA -clustername cluster_node78 -mgmt-port e0d -mgmt-ip 10.254.144.17 -mgmt-netmask 255.255.252.0 -mgmt-gateway 10.254.144.1 -ipaddr1 192.168.150.25 ipadd
ERROR: 'ipadd' was not expected. Please specify -fieldname first.
node7::> cluster create -license NWUZZJPJYBFDAA -clustername cluster_node78 -mgmt-port e0d -mgmt-ip 10.254.144.17 -mgmt-netmask 255.255.252.0 -mgmt-gateway 10.254.144.1 -ipaddr1 192.168.150.25 -ipaddr2 192.168.150.26 -netmask 255.255.255.0 -mtu 9000
ERROR: command failed: Logical interface mgmt1 with IP address 10.254.144.17 is already in use on this node and cannot be used as a cluster interface.
node7::> cluster create -license NWUZZJPJYBFDAA -clustername cluster_node78 -mgmt-port e1a -mgmt-ip 10.254.144.17 -mgmt-netmask 255.255.252.0 -mgmt-gateway 10.254.144.1 -ipaddr1 192.168.150.25 -ipaddr2 192.168.150.26 -netmask 255.255.255.0 -mtu 9000
ERROR: command failed: Logical interface mgmt1 with IP address 10.254.144.17 is already in use on this node and cannot be used as a cluster interface.
node7::> cluster create -license NWUZZJPJYBFDAA -clustername cluster_node78 -mgmt-port e1a -mgmt-ip 10.254.144.17 -mgmt-netmask 255.255.252.0 -mgmt-gateway 10.254.144.1 -ipaddr1 192.168.150.25 -ipaddr2 192.168.150.26 -netmask 255.255.255.0 -mtu 9000
ERROR: command failed: Logical interface mgmt1 with IP address 10.254.144.17 is already in use on this node and cannot be used as a cluster interface.
node7::> cluster create -license NWUZZJPJYBFDAA -clustername cluster_node78 -mgmt-port e0d -mgmt-ip 10.254.144.37 -mgmt-netmask 255.255.252.0 -mgmt-gateway 10.254.144.1 -ipaddr1 192.168.150.25 -ipaddr2 192.168.150.26 -netmask 255.255.255.0 -mtu 9000
Cluster create waiting: Starting create cluster process
Cluster create waiting: Creating LIF for IP Address #1
Cluster create waiting: Creating LIF for IP Address #1
Cluster create waiting: Creating LIF for IP Address #1
Cluster create waiting: Creating LIF for IP Address #2
Cluster create waiting: Creating LIF for IP Address #2
Cluster create waiting: Re-labeling local logical Interfaces
Cluster create waiting: Re-labeling local logical Interfaces

Cluster create waiting: Re-labeling local logical Interfaces


Cluster create waiting: Inserting local LIF ids into management ring
Cluster create waiting: Inserting local LIF ids into management ring
Cluster create waiting: Waiting for other local Replication Database (RDB) to become ready (VIF Manager & Volume Location Database)
Cluster create waiting: Configuring cluster management interface
Cluster create waiting: The cluster management LIF is configured.
Cluster create succeeded: Cluster has been created.
NOTICE: Cluster create: Successfully added aggregate aggr0 to the VLDB
node7::> cluster ?
  ha>          Manage high-availability configuration
  identity>    Manage the cluster's attributes, including name and serial number
  modify       Modify cluster node membership attributes
  show         Display cluster node members
  statistics>  Display cluster statistics
node7::> cluster
node7::cluster> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
node7                 true    true
node8                 true    true
2 entries were displayed.

node7::cluster> stor aggr show
  (storage aggregate show)
Aggregate     Size Available Used% State   #Vols Nodes            RAID Status
--------- -------- --------- ----- ------- ----- ---------------- -----------
aggr0      56.76GB    2.59GB   95% online      1 node7            raid_dp
aggr0_node8
           56.76GB    2.59GB   95% online      1 node8            raid_dp
2 entries were displayed.

node7::cluster> system license show
Feature         Cluster SN  Limit Description
--------------- ----------- ----- -----------
Base            1-80-123456 666   Base License w/cluster size limit (nodes)

node7::cluster> licence add -lic FEDPEKPJYBFDAA
ERROR: "licence" is not a recognized command
node7::cluster> license add -lic FEDPEKPJYBFDAA
  (system license add)
node7::cluster> lic add TJFAEKPJYBFDAA
  (system license add)
node7::cluster> lic add JAMHCKPJYBFDAA
  (system license add)
node7::cluster> system lic show
Feature         Cluster SN  Limit Description
--------------- ----------- ----- -----------
Base            1-80-123456 666   Base License w/cluster size limit (nodes)
CIFS            1-80-123456 666   CIFS License
NFS             1-80-123456 666   NFS License
SnapMirror_DP   1-80-123456 666   SnapMirror Data Protection License
4 entries were displayed.

node7::cluster> system date show
Node      Date                Timezone
--------- ------------------- -------------------------
node7     11/30/2010 22:35:53 Etc/UTC
node8     11/30/2010 22:35:54 Etc/UTC
2 entries were displayed.
node7::cluster> system date modify -node * -timezone US/Pacific -date "11/30/10 14:40:00"
ERROR: command failed on entry "node7": Can not set the time on node node7: Invalid argument
WARNING: Do you want to continue running this command? {y|n}: n
0 entries were modified.
node7::cluster> system date modify -node7 -timezone US/Pacific -date "11/30/10 14:40:00"
ERROR: invalid argument "-node7"
node7::cluster> system date modify -node node7 -timezone US/Pacific -date "11/30/10 14:40:00"
ERROR: command failed: Can not set the time on node node7: Invalid argument
node7::cluster> system date modify -node node7 -timezone US/Pacific -date "11/30/2010 14:40:00"
node7::cluster> system date modify -node * -timezone US/Pacific -date "11/30/2010 14:45:00"
2 entries were modified.
node7::cluster> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
node7                 true    true
node8                 true    true
2 entries were displayed.
node7::cluster> ?
  ha>          Manage high-availability configuration
  identity>    Manage the cluster's attributes, including name and serial number
  modify       Modify cluster node membership attributes
  show         Display cluster node members
  statistics>  Display cluster statistics

node7::cluster> storage
node7::storage> storage ?
  aggregate>   Manage storage aggregates
  disk>        Manage physical disks
  failover>    Manage storage failover

node7::storage>
login: admin
Password:
node7::>
node7::> security login show
                                  Authentication                      Acct
UserName              Application Method         Role Name            Locked
--------------------- ----------- -------------- -------------------- ------
admin                 console     password       admin                no
admininistrator       http        password       admin                no
administrator         http        password       admin                no
public                snmp        community      readonly             -
4 entries were displayed.

node7::> >
ERROR: ">" is not a recognized command
node7::> ?
  up           Go up one directory
  cluster>     Manage clusters
  dashboard>   Display dashboards
  event>       Manage system events
  exit         Quit the CLI session
  history      Show the history of commands for this CLI session
  job>         Manage jobs and job schedules
  network>     Manage physical and virtual network connections
  redo         Execute a previous command
  rows         Show/Set the rows for this CLI session
  run          Run interactive or non-interactive commands in the node shell
  security>    The security directory
  snapmirror>  Manage SnapMirror
  statistics>  Display operational statistics
  storage>     Manage physical storage, including disks, aggregates, and failover
  system>      The system directory
  top          Go to the top-level directory
  volume>      Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>     Manage virtual servers
node7::> ?
  up           Go up one directory
  cluster>     Manage clusters
  dashboard>   Display dashboards
  event>       Manage system events
  exit         Quit the CLI session
  history      Show the history of commands for this CLI session
  job>         Manage jobs and job schedules
  network>     Manage physical and virtual network connections
  redo         Execute a previous command
  rows         Show/Set the rows for this CLI session
  run          Run interactive or non-interactive commands in the node shell
  security>    The security directory
  snapmirror>  Manage SnapMirror
  statistics>  Display operational statistics
  storage>     Manage physical storage, including disks, aggregates, and failover
  system>      The system directory
  top          Go to the top-level directory
  volume>      Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>     Manage virtual servers
node7::> ?
  up           Go up one directory
  cluster>     Manage clusters
  dashboard>   Display dashboards
  event>       Manage system events
  exit         Quit the CLI session
  history      Show the history of commands for this CLI session
  job>         Manage jobs and job schedules
  network>     Manage physical and virtual network connections
  redo         Execute a previous command
  rows         Show/Set the rows for this CLI session
  run          Run interactive or non-interactive commands in the node shell
  security>    The security directory
  snapmirror>  Manage SnapMirror
  statistics>  Display operational statistics
  storage>     Manage physical storage, including disks, aggregates, and failover
  system>      The system directory
  top          Go to the top-level directory
  volume>      Manage virtual storage, including volumes, snapshots, and mirrors
  vserver>     Manage virtual servers

node7::> cluster
node7::cluster> ?
  ha>          Manage high-availability configuration
  identity>    Manage the cluster's attributes, including name and serial number
  modify       Modify cluster node membership attributes
  show         Display cluster node members
  statistics>  Display cluster statistics

node7::cluster> statistics
node7::cluster statistics> ?
  show   Display cluster-wide statistics
node7::cluster statistics> show
Counter                 Value         Delta
------------- --------------- -------------
CPU Busy:                  5%             -
Operations:
    Total:                  0             -
    NFS:                    0             -
    CIFS:                   0             -
Data Network:
    Busy:                  0%             -
    Received:          3.21MB             -
    Sent:              10.7KB             -
Cluster Network:
    Busy:                  0%             -
    Received:          91.8MB             -
    Sent:              91.7MB             -
Storage Disk:
    Read:               530MB             -
    Write:             5.94GB             -
node7::cluster statistics> ..
node7::cluster> storage aggr show
Aggregate     Size Available Used% State   #Vols Nodes            RAID Status
--------- -------- --------- ----- ------- ----- ---------------- -----------
aggr0      56.76GB    2.59GB   95% online      1 node7            raid_dp
aggr0_node8
           56.76GB    2.59GB   95% online      1 node8            raid_dp
2 entries were displayed.
node7::cluster>
node7::cluster>
node7::cluster>
node7::cluster> storage aggr
node7::storage aggregate> ?
  add-disks          Add disks to an aggregate
  create             Create an aggregate
  delete             Delete an aggregate
  member>            The member directory
  modify             Modify aggregate attributes
  rename             Rename an aggregate
  scrub              Aggregate parity scrubbing
  show               Display a list of aggregates
  show-scrub-status  Display aggregate scrubbing status

node7::storage aggregate> modify ?
  [-aggregate] <aggregate name>    Aggregate
  [ -state <aggregate state> ]     State
  [ -raidtype|-t <raid type> ]     Raid Type
  [ -maxraidsize|-s <integer> ]    Max RAID Size
  [ -ha-policy {sfo|cfo} ]         HA Policy

node7::storage aggregate> scrub ?
  [-aggregate] <aggregate name>                  Aggregate
  [ -raidgroup <text> ]                          RAID Group
  [-action] {start|stop|resume|suspend|status}   Action
node7::storage aggregate> system node run -node node7 sysconfig -r
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)  Phys (MB/blks)
      --------- ------ ------------- ---- ---- ---- ----- --------------- ---------------
      dparity   0a.112 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
      parity    0a.113 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
      data      0a.114 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400

Spare disks (for block or zoned checksum traditional volumes or aggregates)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)  Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- --------------- ---------------
spare     0a.121 0a  7     9  FC:A -    FCAL 10000 68000/139264000 68552/140395088
spare     0a.115 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.116 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.117 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.118 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.119 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.120 0a           FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.122 0a       10  FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.123 0a       11  FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.124 0a       12  FC:A -    FCAL 10000 68000/139264000 69536/142410400
spare     0a.125 0a       13  FC:A -    FCAL 10000 68000/139264000 69536/142410400

node7::storage aggregate> modify -state ?
  online
  restricted
node7::storage aggregate> vol ?
  (volume)
  copy         Make a copy of a volume
  create       Create a new volume
  delete       Delete an existing volume
  member>      Manage constituent volumes of striped volumes
  modify       Modify volume attributes
  mount        Mount a volume on another volume with a junction-path
  move         Move a volume from one aggregate to another aggregate
  qtree>       Manage qtrees
  quota>       Manage Quotas, Policies, Rules and Reports
  rename       Rename an existing volume
  show         Display a list of volumes
  snapshot>    Manage snapshots
  unmount      Unmount a volume
node7::storage aggregate> set -prive
ERROR: invalid argument "-prive"
node7::storage aggregate> set -privilege diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

node7::storage aggregate*> set -priv advanced


node7::storage aggregate*> vol ?
  (volume)
  add-members  *Add members to an existing striped volume
  copy         Make a copy of a volume
  create       Create a new volume
  delete       Delete an existing volume
  make-vsroot  *Designate a non-root volume as a root volume of the vserver
  member>      Manage constituent volumes of striped volumes
  modify       Modify volume attributes
  mount        Mount a volume on another volume with a junction-path
  move         Move a volume from one aggregate to another aggregate
  qtree>       Manage qtrees
  quota>       Manage Quotas, Policies, Rules and Reports
  rename       Rename an existing volume
  show         Display a list of volumes
  snapshot>    Manage snapshots
  start-check  *Start testing volume for errors
  stop-check   *Stop testing volume
  unmount      Unmount a volume
node7::storage aggregate*> set admin
login: admin
Password:
node7::>
node7::> stor aggr show
  (storage aggregate show)
Aggregate     Size Available Used% State   #Vols Nodes            RAID Status
--------- -------- --------- ----- ------- ----- ---------------- -----------
aggr0      56.76GB    2.59GB   95% online      1 node7            raid_dp
aggr0_node8
           56.76GB    2.59GB   95% online      1 node8            raid_dp
node8_george_aggr
          170.3GB   170.3GB    0% online      0 node8            raid_dp
3 entries were displayed.
node7::> vol show
  (volume show)
Virtual
Server    Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
node7     vol0         aggr0        online     RW      53.87GB    41.92GB   22%
node8     vol0         aggr0_node8  online     RW      53.87GB    41.92GB   22%
2 entries were displayed.

node7::> stor aggr create
  (storage aggregate create)
Usage:
    [-aggregate] <aggregate name>            Aggregate
  { [ -nodes <nodename>, ... ]               Nodes
    [-diskcount] <integer>                   Number Of Disks
  | [-disklist|-d] <disk path name>, ... }   Disks
    [ -raidtype|-t <raid type> ]             Raid Type
    [ -maxraidsize|-s <integer> ]            Max RAID Size
    [ -volume-style {flex|striped} ]         Volume Style
    [ -allow-mixed|-f [{true|false}] ]       Allow Disks With Different RPMs

node7::>

(Login timeout will occur in 60 seconds)

node7::>

(Login timeout will occur in 50 seconds)

node7::>

(Login timeout will occur in 40 seconds)

node7::>

(Login timeout will occur in 30 seconds)

node7::>

(Login timeout will occur in 20 seconds)

node7::>

(Login timeout will occur in 10 seconds)

node7::>
Exiting due to timeout
login:
login:
login: admin
Password:
node7::> stor aggr create ?
  (storage aggregate create)
    [-aggregate] <aggregate name>            Aggregate
  { [ -nodes <nodename>, ... ]               Nodes
    [-diskcount] <integer>                   Number Of Disks
  | [-disklist|-d] <disk path name>, ... }   Disks
    [ -raidtype|-t <raid type> ]             Raid Type
    [ -maxraidsize|-s <integer> ]            Max RAID Size
    [ -volume-style {flex|striped} ]         Volume Style
    [ -allow-mixed|-f [{true|false}] ]       Allow Disks With Different RPMs
node7::> stor aggr create
  (storage aggregate create)
Usage:
    [-aggregate] <aggregate name>            Aggregate
  { [ -nodes <nodename>, ... ]               Nodes
    [-diskcount] <integer>                   Number Of Disks
  | [-disklist|-d] <disk path name>, ... }   Disks
    [ -raidtype|-t <raid type> ]             Raid Type
    [ -maxraidsize|-s <integer> ]            Max RAID Size
    [ -volume-style {flex|striped} ]         Volume Style
    [ -allow-mixed|-f [{true|false}] ]       Allow Disks With Different RPMs
node7::> vserver show
                                                       Name    Name    NIS     LDAP
Virtual Server Type    Root Volume  Aggregate          Service Mapping Domain  Client
-------------- ------- ------------ ------------------ ------- ------- ------- -------
cluster_node78 admin
node7          node
node8          node
3 entries were displayed.

node7::> store aggr -aggr node7_aggr1 -diskcount 3

ERROR: "store" is not a recognized command


node7::> stor aggr -aggr node7_aggr1 -diskcount 3

ERROR: "-aggr" is not a recognized command


node7::> stor aggr create -aggr node7_aggr1 -diskcount 3
(storage aggregate create)
[Job 6] Job succeeded: DONE
node7::> stor aggr show
  (storage aggregate show)
Aggregate     Size Available Used% State   #Vols Nodes            RAID Status
--------- -------- --------- ----- ------- ----- ---------------- -----------
aggr0      56.76GB    2.59GB   95% online      1 node7            raid_dp
aggr0_node8
           56.76GB    2.59GB   95% online      1 node8            raid_dp
node7_aggr1
           56.76GB   56.76GB    0% online      0 node7            raid_dp
node8_george_aggr
          170.3GB   170.3GB    0% online      1 node8            raid_dp
4 entries were displayed.
node7::> stor aggr show -aggr node7_aggr1
  (storage aggregate show)
          Aggregate: node7_aggr1
               Size: 56.76GB
          Used Size: 96KB
    Used Percentage: 0%
     Available Size: 56.76GB
              State: online
              Nodes: node7
    Number Of Disks: 3
              Disks: node7:0a.115, node7:0a.116, node7:0a.117
  Number Of Volumes: 0
             Plexes: /node7_aggr1/plex0(online)
        RAID Groups: /node7_aggr1/plex0/rg0
          Raid Type: raid_dp
      Max RAID Size: 16
        RAID Status: raid_dp
   Checksum Enabled: true
    Checksum Status: active
     Checksum Style: block
       Inconsistent: false
       Volume Style: flex
          HA Policy: sfo

node7::> storage failover show
                              Takeover InterConn
Node           Partner        Enabled  Possible Up        State
-------------- -------------- -------- -------- --------- ------------------
node8                         false    false    false     waiting
node7::> system node run -node node7 license add KDQLGBN
A cf site license has been installed.
Controller Failover will be enabled upon reboot.
Make sure that each individual service is licensed
on both nodes or on neither node. Remember to configure
the network interfaces for the other node.
node7::> system node run -node node7 license
a_sis                  not licensed
cf                     site KDQLGBN
cf_remote              not licensed
cifs                   not licensed
compression            not licensed
disk_sanitization      not licensed
fcp                    not licensed
flex_clone             not licensed
flex_scale             not licensed
flexcache_nfs          not licensed
http                   not licensed
iscsi                  not licensed
multistore             not licensed
nearstore_option       not licensed
nfs                    not licensed
operations_manager     not licensed
pamii                  not licensed
protection_manager     not licensed
provisioning_manager   not licensed
smdomino               not licensed
smsql                  not licensed
snapdrive_unix         not licensed
snapdrive_windows      not licensed
snaplock               not licensed
snaplock_enterprise    not licensed
snapmanager_hyperv     not licensed
snapmanager_oracle     not licensed
snapmanager_sap        not licensed
snapmanager_sharepoint not licensed
snapmanagerexchange    not licensed
snapmirror             not licensed
snapmirror_sync        not licensed
snapmover              not licensed
snaprestore            not licensed
snapvalidator          not licensed
sv_applications_pri    not licensed
sv_exchange_pri        not licensed
sv_linux_pri           not licensed
sv_marketing_pri       not licensed
sv_ontap_pri           not licensed
sv_ontap_sec           not licensed
sv_oracle_pri          not licensed
sv_sharepoint_pri      not licensed
sv_sql_pri             not licensed
sv_unix_pri            not licensed
sv_vi_pri              not licensed
sv_vmware_pri          not licensed
sv_windows_ofm_pri     not licensed
sv_windows_pri         not licensed
syncmirror_local       not licensed
v-series               not licensed
vld                    not licensed

node7::> system node reboot -node node7


login: .
Dec 1 Uptime: 3d3h5m53s
System rebooting...
Phoenix TrustedCore(tm) Server

Copyright 1985-2004 Phoenix Technologies Ltd.


All Rights Reserved
BIOS version: 2.3.0
Portions Copyright (c) 2006-2009 NetApp All Rights Reserved
CPU= Dual Core AMD Opteron(tm) Processor 265 X 2
Testing RAM
512MB RAM tested
8192MB RAM installed
Fixed Disk 0: STEC
NACF1GM1U-B11

Boot Loader version 1.6.1


Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2009 NetApp
CPU Type: Dual Core AMD Opteron(tm) Processor 265
Starting AUTOBOOT press Ctrl-C to abort...
Loading x86_64/freebsd/image2/kernel:....0x100000/3277608 0x520340/3198128 0x82cff0/562512 Entry at 0x801445e0
Loading x86_64/freebsd/image2/platform.ko:0x8b7000/147808 0x921d70/156600 0x8db160/456 0x948128/1200 0x8db328/616 0x9485d8/1848 0x8db590/15629 0x8df2a0/20870 0x8e4428/80 0x948d10/240 0x8e4478/576 0x948e00/1728 0x8e46b8/304 0x9494c0/912 0x8e47e8/48 0x949850/144 0x8e4820/48000 0x9498e0/56712 0x8f03a0/425 0x90ae70/3090 0x921c81/237 0x90ba88/47400 0x9173b0/43217
Starting program at 0x801445e0
NetApp Data ONTAP Release 8.0RC1 Cluster-Mode
Copyright (C) 1992-2009 NetApp.
All rights reserved.
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
arp_rtrequest: bad gateway 127.0.20.1 (!AF_LINK)
BSD initialization for BSD <-> Ontap communication Done!
add host 127.0.20.1: gateway 127.0.20.1
add host 127.0.10.1: gateway 127.0.20.1
Doesn't use '/etc/syslog.conf', no syslogd
Skipping adding config files for console for 0
7 mode networking configuration change is disallowed while in 10 mode.
Vdisk Snap Table for host:0 is initialized
fcp_service: FCP is not licensed.
ONTAP EMS log disabled. User space <notifyd> processes EMS log file
mroot is now available
mroot is now available
filter sync'd
Wed Dec 1 18:25:52 GMT 2010
login: admin
Password:
node7::> stor failover show
  (storage failover show)
                              Takeover InterConn
Node           Partner        Enabled  Possible Up        State
-------------- -------------- -------- -------- --------- ------------------
node7          node8          false    false    true      connected
node8          node7          false    false    true      connected
2 entries were displayed.

node7::> stor fail show -instance
  (storage failover show)
                                   Node: node7
                           Partner Name: node8
                          Node NVRAM ID: 118060269
                       Partner NVRAM ID: 118059989
                        Service Enabled: false
                      Takeover Possible: false
           Reason Takeover Not Possible: Storage failover is disabled
                                         Storage failover is disabled on the partner node
                                         NVRAM log not synchronized
                        Interconnect Up: true
                                  State: connected
                    Time Until Takeover: -
Reason Takeover Not Possible By Partner: Storage failover is disabled
                                         NVRAM log not synchronized
                  Auto Giveback Enabled: false
          Auto-Abort Operations Enabled: true
                  Check Partner Enabled: true
         Takeover Detection Time (secs): 15
              Takeover On Panic Enabled: true
             Takeover On Reboot Enabled: true

                                   Node: node8
                           Partner Name: node7
                          Node NVRAM ID: 118059989
                       Partner NVRAM ID: 118060269
                        Service Enabled: false
                      Takeover Possible: false
           Reason Takeover Not Possible: Storage failover is disabled
                                         Storage failover is disabled on the partner node
                                         NVRAM log not synchronized
                        Interconnect Up: true
                                  State: connected
                    Time Until Takeover: -
Reason Takeover Not Possible By Partner: Storage failover is disabled
                                         NVRAM log not synchronized
                  Auto Giveback Enabled: false
          Auto-Abort Operations Enabled: true
                  Check Partner Enabled: true
         Takeover Detection Time (secs): 15
              Takeover On Panic Enabled: true
             Takeover On Reboot Enabled: true
2 entries were displayed.

node7::> system node run -node node8 license
a_sis                  not licensed
cf                     site KDQLGBN
cf_remote              not licensed
cifs                   not licensed
compression            not licensed
disk_sanitization      not licensed
fcp                    not licensed
flex_clone             not licensed
flex_scale             not licensed
flexcache_nfs          not licensed
http                   not licensed
iscsi                  not licensed
multistore             not licensed
nearstore_option       not licensed
nfs                    not licensed
operations_manager     not licensed
pamii                  not licensed
protection_manager     not licensed
provisioning_manager   not licensed
smdomino               not licensed
smsql                  not licensed
snapdrive_unix         not licensed
snapdrive_windows      not licensed
snaplock               not licensed
snaplock_enterprise    not licensed
snapmanager_hyperv     not licensed
snapmanager_oracle     not licensed
snapmanager_sap        not licensed
snapmanager_sharepoint not licensed
snapmanagerexchange    not licensed
snapmirror             not licensed
snapmirror_sync        not licensed
snapmover              not licensed
snaprestore            not licensed
snapvalidator          not licensed
sv_applications_pri    not licensed
sv_exchange_pri        not licensed
sv_linux_pri           not licensed
sv_marketing_pri       not licensed
sv_ontap_pri           not licensed
sv_ontap_sec           not licensed
sv_oracle_pri          not licensed
sv_sharepoint_pri      not licensed
sv_sql_pri             not licensed
sv_unix_pri            not licensed
sv_vi_pri              not licensed
sv_vmware_pri          not licensed
sv_windows_ofm_pri     not licensed
sv_windows_pri         not licensed
syncmirror_local       not licensed
v-series               not licensed
vld                    not licensed

node7::> store fail modify -node node7 -enable true -auto-giveback true
ERROR: "store" is not a recognized command
node7::> store fail modify -node node7 -enable true -auto-giveback true
ERROR: "store" is not a recognized command
node7::> stor fail modify -node node7 -enable true -auto-giveback true
(storage failover modify)
ERROR: command failed: Failed to enable storage failover service. Reason:
Service is already enabled.
node7::> stor fail show -instance
  (storage failover show)

Node: node7
Partner Name: node8
Node NVRAM ID: 118060269
Partner NVRAM ID: 118059989
Service Enabled: true
Takeover Possible: true
Reason Takeover Not Possible: -
Interconnect Up: true
State: connected
Time Until Takeover: -
Reason Takeover Not Possible By Partner: -
Auto Giveback Enabled: false
Auto-Abort Operations Enabled: true
Check Partner Enabled: true
Takeover Detection Time (secs): 15
Takeover On Panic Enabled: true
Takeover On Reboot Enabled: true

Node: node8
Partner Name: node7
Node NVRAM ID: 118059989
Partner NVRAM ID: 118060269
Service Enabled: true
Takeover Possible: true
Reason Takeover Not Possible: -
Interconnect Up: true
State: connected
Time Until Takeover: -
Reason Takeover Not Possible By Partner: -
Auto Giveback Enabled: true
Auto-Abort Operations Enabled: true
Check Partner Enabled: true
Takeover Detection Time (secs): 15
Takeover On Panic Enabled: true
Takeover On Reboot Enabled: true
2 entries were displayed.

node7::> cluster ha show


High Availability Configured: false
node7::> cluster ha modify -configured true
WARNING: High Availability (HA) configuration for cluster services requires
that both SFO storage failover and SFO auto-giveback be enabled. These
actions will be performed if necessary.
Do you want to continue? {y|n}: y
NOTICE: modify_imp: HA is configured in management.
node7::> cluster ha show
High Availability Configured: true
node7::> stor fail show -instance
  (storage failover show)

Node: node7
Partner Name: node8
Node NVRAM ID: 118060269
Partner NVRAM ID: 118059989
Service Enabled: true
Takeover Possible: true
Reason Takeover Not Possible: -
Interconnect Up: true
State: connected
Time Until Takeover: -
Reason Takeover Not Possible By Partner: -
Auto Giveback Enabled: true
Auto-Abort Operations Enabled: true
Check Partner Enabled: true
Takeover Detection Time (secs): 15
Takeover On Panic Enabled: true
Takeover On Reboot Enabled: true

Node: node8
Partner Name: node7
Node NVRAM ID: 118059989
Partner NVRAM ID: 118060269
Service Enabled: true
Takeover Possible: true
Reason Takeover Not Possible: -
Interconnect Up: true
State: connected
Time Until Takeover: -
Reason Takeover Not Possible By Partner: -
Auto Giveback Enabled: true
Auto-Abort Operations Enabled: true
Check Partner Enabled: true
Takeover Detection Time (secs): 15
Takeover On Panic Enabled: true
Takeover On Reboot Enabled: true
2 entries were displayed.

node7::> stor fail modify -node node7 -auto-giveback false


(storage failover modify)
Disabling auto-giveback under cluster HA configuration will prevent the
management cluster services from automatically going online under
alternating-failure scenarios. Do you want to disable auto-giveback?
{y|n}: y
node7::> stor fail takeover -bynode node8
(storage failover takeover)
WARNING: A takeover will be initiated. Once the partner node reboots, a giveback
will be automatically initiated. Do you want to continue?
{y|n}: y
node7::> .
Dec 1 Uptime: 11m56s
System rebooting...
Phoenix TrustedCore(tm) Server
Copyright 1985-2004 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 2.3.0
Portions Copyright (c) 2006-2009 NetApp All Rights Reserved
CPU= Dual Core AMD Opteron(tm) Processor 265 X 2
Testing RAM
512MB RAM tested
8192MB RAM installed
Fixed Disk 0: STEC NACF1GM1U-B11

Boot Loader version 1.6.1


Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2009 NetApp
CPU Type: Dual Core AMD Opteron(tm) Processor 265
Starting AUTOBOOT press Ctrl-C to abort...
Loading x86_64/freebsd/image2/kernel:....0x100000/3277608 0x520340/3198128 0x82c
ff0/562512 Entry at 0x801445e0
Loading x86_64/freebsd/image2/platform.ko:0x8b7000/147808 0x921d70/156600 0x8db1
60/456 0x948128/1200 0x8db328/616 0x9485d8/1848 0x8db590/15629 0x8df2a0/20870 0x
8e4428/80 0x948d10/240 0x8e4478/576 0x948e00/1728 0x8e46b8/304 0x9494c0/912 0x8e
47e8/48 0x949850/144 0x8e4820/48000 0x9498e0/56712 0x8f03a0/425 0x90ae70/3090 0x
921c81/237 0x90ba88/47400 0x9173b0/43217
Starting program at 0x801445e0
NetApp Data ONTAP Release 8.0RC1 Cluster-Mode
Copyright (C) 1992-2009 NetApp.
All rights reserved.
*******************************
*
*
* Press Ctrl-C for Boot Menu. *
*
*
*******************************
arp_rtrequest: bad gateway 127.0.20.1 (!AF_LINK)
BSD initialization for BSD <-> Ontap communication Done!
add host 127.0.20.1: gateway 127.0.20.1
add host 127.0.10.1: gateway 127.0.20.1
Reservation conflict found on this node's disks!
Local System ID: 118060269
Press Ctrl-C for Maintenance menu to release disks.
Disk reservations have been released
Waiting for giveback...(Press Ctrl-C to abort wait)Continuing boot...
Doesn't use '/etc/syslog.conf', no syslogd
Skipping adding config files for console for 0
7 mode networking configuration change is disallowed while in 10 mode.
Vdisk Snap Table for host:0 is initialized
RDB-HA ending primary
fcp_service: FCP is not licensed.
ONTAP EMS log disabled. User space <notifyd> processes EMS log file
mroot is now available
mroot is now available
filter sync'd
Wed Dec 1 18:40:19 GMT 2010
login: root
Password:
Login incorrect
login: admin
Password:
node7::> stor fail modify -node node7 -auto-giveback true
(storage failover modify)

node7::> vserver show
                                                           Name    Name    NIS     LDAP
Virtual Server   Type     Root Volume  Aggregate           Service Mapping Domain  Client
---------------  -------  -----------  ------------------  ------- ------- ------- ------
cluster_node78   admin    -            -                   -       -       -       -
node7            node     -            -                   -       -       -       -
node8            node     -            -                   -       -       -       -
vgeorge          cluster  root         node8_george_aggr   file    file    -       -
4 entries were displayed.


node7::> vserver create -vserver vjames -rootvolume root -aggr node7_aggr1 -ns-switch file -rootvolume-security-style unix
[Job 8] Job succeeded: Successful
node7::> vserver show
                                                           Name    Name    NIS     LDAP
Virtual Server   Type     Root Volume  Aggregate           Service Mapping Domain  Client
---------------  -------  -----------  ------------------  ------- ------- ------- ------
cluster_node78   admin    -            -                   -       -       -       -
node7            node     -            -                   -       -       -       -
node8            node     -            -                   -       -       -       -
vgeorge          cluster  root         node8_george_aggr   file    file    -       -
vjames           cluster  root         node7_aggr1         file    file    -       -
5 entries were displayed.


node7::> vserver show -vserver vjames
Virtual Server: vjames
Virtual Server Type: cluster
Virtual Server UUID: 02852209-fd7b-11df-b44d-123478563412
Root Volume: root
Aggregate: node7_aggr1
Name Service Switch: file
Name Mapping Switch: file
NIS Domain: -
Root Volume Security Style: unix
LDAP Client: -
Language: C
Default Snapshot Policy: default
Comment: -
Default Anti-Virus On-Access Policy: default
Quota Policy: -

node7::> vol show
  (volume show)
Virtual
Server     Volume    Aggregate           State    Type  Size      Available  Used%
---------  --------  ------------------  -------  ----  --------  ---------  -----
node7      vol0      aggr0               online   RW    53.87GB   41.92GB    22%
node8      vol0      aggr0_node8         online   RW    53.87GB   41.92GB    22%
vgeorge    root      node8_george_aggr   online   RW    20MB      15.91MB    20%
vgeorge    volume1   node8_george_aggr   online   RW    20MB      15.91MB    20%
vjames     root      node7_aggr1         online   RW    20MB      15.91MB    20%
5 entries were displayed.


node7::> vol show -vserver vjames -volume root
  (volume show)
Virtual Server Name: vjames
Volume Name: root
Aggregate Name: node7_aggr1
Volume Size: 20MB
Volume Data Set ID: 1027
Volume Master Data Set ID: 2147484675
Volume State: online
Volume Type: RW
Volume Style: flex
Volume Ownership: cluster
Export Policy: default
User ID: 0
Group ID: 0
Security Style: unix
Unix Permissions: ---rwx------
Junction Path: /
Junction Path Source: -
Junction Active: -
Parent Volume: -
Comment: -
Available Size: 15.91MB
Total Size: 16MB
Used Size: 92KB
Used Percentage: 20%
Total Files (for user-visible data): 566
Files Used (for user-visible data): 96
Space Guarantee Style: volume
Space Guarantee In Effect: true
Percent of Space Reserved for Snapshots: 20%
Used Percent of Snapshot Reserve: 0%
Snapshot Policy: default
Creation Time: Wed Dec 01 18:44:19 2010
Anti-Virus On-Access Policy: default
Inconsistency in the file system: false

node7::> stor aggr show
  (storage aggregate show)
Aggregate          Size      Available  Used%  State    #Vols  Nodes  RAID Status
-----------------  --------  ---------  -----  -------  -----  -----  -----------
aggr0              56.76GB   2.59GB     95%    online   1      node7  raid_dp
aggr0_node8        56.76GB   2.59GB     95%    online   1      node8  raid_dp
node7_aggr1        56.76GB   56.74GB    0%     online   1      node7  raid_dp
node8_george_aggr  170.3GB   170.2GB    0%     online   2      node8  raid_dp
4 entries were displayed.
node7::> vol create -vserver vjames -volume volume1 -aggr node7_aggr1 -junction-path /volume1
(volume create)
[Job 9] Job succeeded: Successful
node7::> vol show
  (volume show)
Virtual
Server     Volume    Aggregate           State    Type  Size      Available  Used%
---------  --------  ------------------  -------  ----  --------  ---------  -----
node7      vol0      aggr0               online   RW    53.87GB   41.92GB    22%
node8      vol0      aggr0_node8         online   RW    53.87GB   41.92GB    22%
vgeorge    root      node8_george_aggr   online   RW    20MB      15.91MB    20%
vgeorge    volume1   node8_george_aggr   online   RW    20MB      15.91MB    20%
vjames     root      node7_aggr1         online   RW    20MB      15.91MB    20%
vjames     volume1   node7_aggr1         online   RW    20MB      15.91MB    20%
6 entries were displayed.
node7::> vol show -vserver vjames -vol volume1


(volume show)
ERROR: Ambiguous argument. Possible matches include:
-volume
-volume-style
-volume-ownership
node7::> vol show -vserver vjames -volume volume1
  (volume show)
Virtual Server Name: vjames
Volume Name: volume1
Aggregate Name: node7_aggr1
Volume Size: 20MB
Volume Data Set ID: 1028
Volume Master Data Set ID: 2147484676
Volume State: online
Volume Type: RW
Volume Style: flex
Volume Ownership: cluster
Export Policy: default
User ID: 0
Group ID: 0
Security Style: unix
Unix Permissions: ---rwx------
Junction Path: /volume1
Junction Path Source: RW_volume
Junction Active: true
Parent Volume: root
Comment: -
Available Size: 15.91MB
Total Size: 16MB
Used Size: 92KB
Used Percentage: 20%
Total Files (for user-visible data): 566
Files Used (for user-visible data): 96
Space Guarantee Style: volume
Space Guarantee In Effect: true
Percent of Space Reserved for Snapshots: 20%
Used Percent of Snapshot Reserve: 0%
Snapshot Policy: default
Creation Time: Wed Dec 01 18:48:33 2010
Anti-Virus On-Access Policy: default
Inconsistency in the file system: false

node7::> vserver show
                                                           Name    Name    NIS     LDAP
Virtual Server   Type     Root Volume  Aggregate           Service Mapping Domain  Client
---------------  -------  -----------  ------------------  ------- ------- ------- ------
cluster_node78   admin    -            -                   -       -       -       -
node7            node     -            -                   -       -       -       -
node8            node     -            -                   -       -       -       -
vgeorge          cluster  root         node8_george_aggr   file    file    -       -
vjames           cluster  root         node7_aggr1         file    file    -       -
5 entries were displayed.


node7::>

(Login timeout will occur in 60 seconds)

node7::>

(Login timeout will occur in 50 seconds)

node7::>

(Login timeout will occur in 40 seconds)

node7::>

(Login timeout will occur in 30 seconds)

node7::>

(Login timeout will occur in 20 seconds)

node7::>

(Login timeout will occur in 10 seconds)

node7::>
Exiting due to timeout
login:
login: admin
Password:
node7::> net port show
  (network port show)
                                     Auto-Negot  Duplex      Speed (Mbps)
Node   Port  Role       Link  MTU    Admin/Oper  Admin/Oper  Admin/Oper
------ ----- ---------- ----  -----  ----------  ----------  ------------
node7
       e0a   cluster    up    9000   true/true   full/full   auto/1000
       e0b   cluster    up    9000   true/true   full/full   auto/1000
       e0c   data       up    1500   true/true   full/full   auto/1000
       e0d   data       up    1500   true/true   full/full   auto/1000
       e1a   node-mgmt  up    1500   true/true   full/full   auto/1000
       e1b   data       down  1500   true/true   full/half   auto/10
node8
       e0a   cluster    up    9000   true/true   full/full   auto/1000
       e0b   cluster    up    9000   true/true   full/full   auto/1000
       e0c   data       up    1500   true/true   full/full   auto/1000
       e0d   data       up    1500   true/true   full/full   auto/1000
       e1a   node-mgmt  up    1500   true/true   full/full   auto/1000
       e1b   data       down  1500   true/true   full/half   auto/10
12 entries were displayed.

node7::> net int show
  (network interface show)
            Logical       Status      Network             Current   Current  Is
Server      Interface     Admin/Oper  Address/Mask        Node      Port     Home
----------- ------------  ----------  ------------------  --------  -------  -----
cluster_node78
            cluster_mgmt  up/up       10.254.144.37/22    node8     e0c      false
node7
            clus1         up/up       192.168.150.25/24   node7     e0a      true
            clus2         up/up       192.168.150.26/24   node7     e0b      true
            mgmt1         up/up       10.254.144.17/22    node7     e1a      true
node8
            clus1         up/up       192.168.150.27/24   node8     e0a      true
            clus2         up/up       192.168.150.28/24   node8     e0b      true
            mgmt1         up/up       10.254.144.18/22    node8     e1a      true
7 entries were displayed.

node7::> net routing-groups show
  (network routing-groups show)
          Routing
Server    Group              Subnet             Role          Metric
--------- ------------------ -----------------  ------------  ------
cluster_node78
          c10.254.144.0/22   10.254.144.0/22    cluster-mgmt  20
node7
          c192.168.150.0/24  192.168.150.0/24   cluster       30
          n10.254.144.0/22   10.254.144.0/22    node-mgmt     10
node8
          c192.168.150.0/24  192.168.150.0/24   cluster       30
          n10.254.144.0/22   10.254.144.0/22    node-mgmt     10
5 entries were displayed.
node7::> net routing-groups route show
  (network routing-groups route show)
          Routing
Server    Group              Destination   Gateway        Metric
--------- ------------------ ------------  -------------  ------
cluster_node78
          c10.254.144.0/22   0.0.0.0/0     10.254.144.1   20
node7
          n10.254.144.0/22   0.0.0.0/0     10.254.144.1   10
node8
          n10.254.144.0/22   0.0.0.0/0     10.254.144.1   10
3 entries were displayed.

node7::> net int show -server vjames -lif data


(network interface show)
There are no entries matching your query.
node7::> net port show
  (network port show)
                                     Auto-Negot  Duplex      Speed (Mbps)
Node   Port  Role       Link  MTU    Admin/Oper  Admin/Oper  Admin/Oper
------ ----- ---------- ----  -----  ----------  ----------  ------------
node7
       e0a   cluster    up    9000   true/true   full/full   auto/1000
       e0b   cluster    up    9000   true/true   full/full   auto/1000
       e0c   data       up    1500   true/true   full/full   auto/1000
       e0d   data       up    1500   true/true   full/full   auto/1000
       e1a   node-mgmt  up    1500   true/true   full/full   auto/1000
       e1b   data       down  1500   true/true   full/half   auto/10
node8
       e0a   cluster    up    9000   true/true   full/full   auto/1000
       e0b   cluster    up    9000   true/true   full/full   auto/1000
       e0c   data       up    1500   true/true   full/full   auto/1000
       e0d   data       up    1500   true/true   full/full   auto/1000
       e1a   node-mgmt  up    1500   true/true   full/full   auto/1000
       e1b   data       down  1500   true/true   full/half   auto/10
12 entries were displayed.

node7::> net create -server vjames -lif data1 -role data -home-node node7 -home-port e0c -address 10.254.144.27 -netammask 255.255.252.0
ERROR: "create" is not a recognized command
node7::> net int create -server vjames -lif data1 -role data -home-node node7 -home-port e0c -address 10.254.144.27 -netmask 255.255.252.0
(network interface create)
INFO: Your interface was created successfully; the routing group
d10.254.144.0/22 was created
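As an aside (illustrative only, not ONTAP output): the INFO message above shows that creating the LIF also created the d10.254.144.0/22 routing group, because the LIF's address falls inside that subnet. A minimal Python sketch using the standard ipaddress module checks the same membership, including the gateway address used for the routing-group route later in this session:

```python
import ipaddress

# Illustrative check, not an ONTAP command: the data LIF address and the
# default gateway both belong to the routing group's 10.254.144.0/22 subnet,
# which spans 10.254.144.0 through 10.254.147.255.
subnet = ipaddress.ip_network("10.254.144.0/22")
lif = ipaddress.ip_address("10.254.144.27")
gateway = ipaddress.ip_address("10.254.144.1")

print(lif in subnet, gateway in subnet)
```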
node7::> net int show
  (network interface show)
            Logical       Status      Network             Current   Current  Is
Server      Interface     Admin/Oper  Address/Mask        Node      Port     Home
----------- ------------  ----------  ------------------  --------  -------  -----
cluster_node78
            cluster_mgmt  up/up       10.254.144.37/22    node8     e0c      false
node7
            clus1         up/up       192.168.150.25/24   node7     e0a      true
            clus2         up/up       192.168.150.26/24   node7     e0b      true
            mgmt1         up/up       10.254.144.17/22    node7     e1a      true
node8
            clus1         up/up       192.168.150.27/24   node8     e0a      true
            clus2         up/up       192.168.150.28/24   node8     e0b      true
            mgmt1         up/up       10.254.144.18/22    node8     e1a      true
vgeorge
            data1         up/up       10.254.144.28/22    node8     e0c      true
vjames
            data1         up/up       10.254.144.27/22    node7     e0c      true
9 entries were displayed.

node7::> net int show -server vjames -lif data1
  (network interface show)
Server Name: vjames
Logical Interface: data1
Role: data
Home Node: node7
Home Port: e0c
Current Node: node7
Current Port: e0c
Operational Status: up
Is Home: true
Network Address: 10.254.144.27
Netmask: 255.255.252.0
Bits in the Netmask: 22
Routing Group Name: d10.254.144.0/22
Administrative Status: up
Failover Policy: nextavail
Firewall Policy: data
Auto Revert: false
Use Failover Group: system-defined
DNS Zone: none
Failover Group Name: -
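Side note on the Netmask and Bits in the Netmask pair in the output above: the prefix length is simply the count of 1-bits in the dotted-quad mask. A small Python sketch (illustrative only; netmask_bits is a hypothetical helper, not an ONTAP tool) reproduces the arithmetic:

```python
# Hypothetical helper (not part of Data ONTAP): derive the "Bits in the
# Netmask" value from a dotted-quad netmask by counting its 1-bits.
def netmask_bits(netmask: str) -> int:
    return sum(bin(int(octet)).count("1") for octet in netmask.split("."))

# 255.255.252.0 has 8 + 8 + 6 + 0 = 22 one-bits, matching the /22 shown above.
print(netmask_bits("255.255.252.0"))  # → 22
```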

node7::> net routing-groups show
  (network routing-groups show)
          Routing
Server    Group              Subnet             Role          Metric
--------- ------------------ -----------------  ------------  ------
cluster_node78
          c10.254.144.0/22   10.254.144.0/22    cluster-mgmt  20
node7
          c192.168.150.0/24  192.168.150.0/24   cluster       30
          n10.254.144.0/22   10.254.144.0/22    node-mgmt     10
node8
          c192.168.150.0/24  192.168.150.0/24   cluster       30
          n10.254.144.0/22   10.254.144.0/22    node-mgmt     10
vgeorge
          d10.254.144.0/22   10.254.144.0/22    data          20
vjames
          d10.254.144.0/22   10.254.144.0/22    data          20
7 entries were displayed.
node7::> net routing-groups route show
  (network routing-groups route show)
          Routing
Server    Group              Destination   Gateway        Metric
--------- ------------------ ------------  -------------  ------
cluster_node78
          c10.254.144.0/22   0.0.0.0/0     10.254.144.1   20
node7
          n10.254.144.0/22   0.0.0.0/0     10.254.144.1   10
node8
          n10.254.144.0/22   0.0.0.0/0     10.254.144.1   10
3 entries were displayed.

node7::> net routing-groups route create -server vjames -routing-group vjames_routing_group -gateway 10.254.144.1
  (network routing-groups route create)
ERROR: command failed: Routing group vjames_routing_group not found
node7::> net routing-groups route create -server vjames -routing-group d10.254.144.0/22 -gateway 10.254.144.1
  (network routing-groups route create)
node7::> network routing groups show
ERROR: "groups" is not a recognized command
node7::> network routing-groups routing show
ERROR: "routing" is not a recognized command
node7::> network routing-groups routing show
ERROR: "routing" is not a recognized command
node7::>
node7::>
node7::> network routing-groups route show
          Routing
Server    Group              Destination   Gateway        Metric
--------- ------------------ ------------  -------------  ------
cluster_node78
          c10.254.144.0/22   0.0.0.0/0     10.254.144.1   20
node7
          n10.254.144.0/22   0.0.0.0/0     10.254.144.1   10
node8
          n10.254.144.0/22   0.0.0.0/0     10.254.144.1   10
vgeorge
          d10.254.144.0/22   0.0.0.0/0     10.254.144.1   20
vjames
          d10.254.144.0/22   0.0.0.0/0     10.254.144.1   20
5 entries were displayed.

node7::> net int migrate -server vjames -lif data1 -dest-node node8 -dest-port e0c
  (network interface migrate)


node7::> net int show
  (network interface show)
            Logical       Status      Network             Current   Current  Is
Server      Interface     Admin/Oper  Address/Mask        Node      Port     Home
----------- ------------  ----------  ------------------  --------  -------  -----
cluster_node78
            cluster_mgmt  up/up       10.254.144.37/22    node8     e0c      false
node7
            clus1         up/up       192.168.150.25/24   node7     e0a      true
            clus2         up/up       192.168.150.26/24   node7     e0b      true
            mgmt1         up/up       10.254.144.17/22    node7     e1a      true
node8
            clus1         up/up       192.168.150.27/24   node8     e0a      true
            clus2         up/up       192.168.150.28/24   node8     e0b      true
            mgmt1         up/up       10.254.144.18/22    node8     e1a      true
vgeorge
            data1         up/up       10.254.144.28/22    node8     e0c      true
vjames
            data1         up/up       10.254.144.27/22    node8     e0c      false
9 entries were displayed.

node7::> net int show -server vjames -lif data1
  (network interface show)
Server Name: vjames
Logical Interface: data1
Role: data
Home Node: node7
Home Port: e0c
Current Node: node8
Current Port: e0c
Operational Status: up
Is Home: false
Network Address: 10.254.144.27
Netmask: 255.255.252.0
Bits in the Netmask: 22
Routing Group Name: d10.254.144.0/22
Administrative Status: up
Failover Policy: nextavail
Firewall Policy: data
Auto Revert: false
Use Failover Group: system-defined
DNS Zone: none
Failover Group Name: -

node7::> net int revert -server vjames -lif data1


(network interface revert)
node7::> net int show -sever vjames -lif data* -instance
(network interface show)
ERROR: invalid argument "-sever"
node7::> net int show -server vjames -lif data* -instance
  (network interface show)
Server Name: vjames
Logical Interface: data1
Role: data
Home Node: node7
Home Port: e0c
Current Node: node7
Current Port: e0c
Operational Status: up
Is Home: true
Network Address: 10.254.144.27
Netmask: 255.255.252.0
Bits in the Netmask: 22
Routing Group Name: d10.254.144.0/22
Administrative Status: up
Failover Policy: nextavail
Firewall Policy: data
Auto Revert: false
Use Failover Group: system-defined
DNS Zone: none
Failover Group Name: -

node7::> net int failover show
  (network interface failover show)
             Logical       Target    Target   Target
Server       Interface     Priority  Node     Port    Location
------------ ------------  --------  -------  ------  -------------
cluster_node78
             cluster_mgmt  0         node7    e0d     home
                           1         node7    e0c     -
                           2         node7    e1b     -
                           3         node8    e0c     current
                           4         node8    e0d     -
                           5         node8    e1b     -
vgeorge
             data1         0         node8    e0c     current, home
                           1         node8    e0d     -
                           2         node8    e1b     -
                           3         node7    e0c     -
                           4         node7    e0d     -
                           5         node7    e1b     -
vjames
             data1         0         node7    e0c     current, home
                           1         node7    e0d     -
                           2         node7    e1b     -
                           3         node8    e0c     -
                           4         node8    e0d     -
                           5         node8    e1b     -
18 entries were displayed.

node7::> system node reboot -node node7

login: .
Dec 1 Uptime: 3h19m28s
System rebooting...

Phoenix TrustedCore(tm) Server
Copyright 1985-2004 Phoenix Technologies Ltd.


All Rights Reserved
BIOS version: 2.3.0
Portions Copyright (c) 2006-2009 NetApp All Rights Reserved
CPU= Dual Core AMD Opteron(tm) Processor 265 X 2
Testing RAM
512MB RAM tested
8192MB RAM installed
Fixed Disk 0: STEC NACF1GM1U-B11
Boot Loader version 1.6.1


Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2009 NetApp
CPU Type: Dual Core AMD Opteron(tm) Processor 265
Starting AUTOBOOT press Ctrl-C to abort...
Loading x86_64/freebsd/image2/kernel:....0x100000/3277608 0x520340/3198128 0x82c
ff0/562512 Entry at 0x801445e0
Loading x86_64/freebsd/image2/platform.ko:0x8b7000/147808 0x921d70/156600 0x8db1
60/456 0x948128/1200 0x8db328/616 0x9485d8/1848 0x8db590/15629 0x8df2a0/20870 0x
8e4428/80 0x948d10/240 0x8e4478/576 0x948e00/1728 0x8e46b8/304 0x9494c0/912 0x8e
47e8/48 0x949850/144 0x8e4820/48000 0x9498e0/56712 0x8f03a0/425 0x90ae70/3090 0x
921c81/237 0x90ba88/47400 0x9173b0/43217
Starting program at 0x801445e0
NetApp Data ONTAP Release 8.0RC1 Cluster-Mode
Copyright (C) 1992-2009 NetApp.
All rights reserved.
*******************************
*
*
* Press Ctrl-C for Boot Menu. *
*
*
*******************************
arp_rtrequest: bad gateway 127.0.20.1 (!AF_LINK)
BSD initialization for BSD <-> Ontap communication Done!
add host 127.0.20.1: gateway 127.0.20.1
add host 127.0.10.1: gateway 127.0.20.1
Reservation conflict found on this node's disks!
Local System ID: 118060269
Press Ctrl-C for Maintenance menu to release disks.
Disk reservations have been released
Waiting for giveback...(Press Ctrl-C to abort wait)
This node was previously declared dead.
Pausing to check HA partner status ...
partner is operational and in takeover mode.
You must initiate a giveback or shutdown on the HA
partner in order to bring this node online.
The HA partner is currently operational and in takeover mode. This node cannot continue unless you initiate a giveback on the partner.
Once this is done this node will reboot automatically.
waiting for giveback...

Do you wish to halt this node rather than wait [y/n]? n


The HA partner is currently operational and in takeover mode. This node cannot continue unless you initiate a giveback on the partner.
Once this is done this node will reboot automatically.
waiting for giveback...
Do you wish to halt this node rather than wait [y/n]? n
The HA partner is currently operational and in takeover mode. This node cannot continue unless you initiate a giveback on the partner.
Once this is done this node will reboot automatically.
waiting for giveback...
Do you wish to halt this node rather than wait [y/n]? n
Partner has released takeover lock.
RDB-HA ending primary
Doesn't use '/etc/syslog.conf', no syslogd
Skipping adding config files for console for 0
7 mode networking configuration change is disallowed while in 10 mode.
Vdisk Snap Table for host:0 is initialized
fcp_service: FCP is not licensed.
ONTAP EMS log disabled. User space <notifyd> processes EMS log file
mroot is now available
mroot is now available
filter sync'd
Wed Dec 1 22:01:43 GMT 2010
login: admin
Password:
node7::>
node7::>
node7::> net int revert -server vjames -lif data*
(network interface revert)
1 entry was acted on.
node7::> net int failover show
  (network interface failover show)
             Logical       Target    Target   Target
Server       Interface     Priority  Node     Port    Location
------------ ------------  --------  -------  ------  -------------
cluster_node78
             cluster_mgmt  0         node7    e0d     home
                           1         node7    e0c     -
                           2         node7    e1b     -
                           3         node8    e0c     current
                           4         node8    e0d     -
                           5         node8    e1b     -
vgeorge
             data1         0         node8    e0c     current, home
                           1         node8    e0d     -
                           2         node8    e1b     -
                           3         node7    e0c     -
                           4         node7    e0d     -
                           5         node7    e1b     -
vjames
             data1         0         node7    e0c     current, home
                           1         node7    e0d     -
                           2         node7    e1b     -
                           3         node8    e0c     -
                           4         node8    e0d     -
                           5         node8    e1b     -
18 entries were displayed.

node7::>

(Login timeout will occur in 60 seconds)

node7::>

(Login timeout will occur in 50 seconds)

node7::>

(Login timeout will occur in 40 seconds)

node7::>

(Login timeout will occur in 30 seconds)

node7::>

(Login timeout will occur in 20 seconds)

node7::>

(Login timeout will occur in 10 seconds)

node7::>
Exiting due to timeout
login:
login: admin
Password:
node7::>
node7::> vserver services nis-domain shwo
ERROR: "shwo" is not a recognized command
node7::> vserver services nis-domain sh
This table is currently empty.
node7::> vserver services nis-domain create -vserver vjames -domain gx.Netappu.com -active true -servers 216.240.23.30
node7::> vserver service nis-domain -show
ERROR: "-show" is not a recognized command
node7::> vserver services nis-domain -show
ERROR: "-show" is not a recognized command
node7::> vserver services nis-domain show
Virtual                               NIS
Server         Domain         Active  Server
-------------  -------------- ------  -------------
vjames         gx.Netappu.com true    216.240.23.30
node7::> vserver services unix-user show
This table is currently empty.
node7::> vserver services unix-user create -vserver vjames -name nobody -id 65533
ERROR: invalid argument "-name"
node7::> vserver services unix-group create -vserver vjames -name nobody -id 65533
node7::> vserver services unix-user show
This table is currently empty.
node7::> vserver services unix-group show
Virtual
Server         Name                ID
-------------- ------------------- ----------
vjames         nobody              65533
node7::> vserver services unix-user create -vserver vjames -user nobody -id 65534 -primary-gid 65533
node7::> vserver services unix-group show
Virtual
Server         Name                ID
-------------- ------------------- ----------
vgeorge        nobody              65533
vjames         nobody              65533
2 entries were displayed.
node7::> vserver services unix-user show
Virtual        User             User   Group   Full
Server         Name             ID     ID      Name
-------------- ---------------- ------ ------  --------------------------------
vgeorge        nobody           65534  65533   -
vjames         nobody           65534  65533   -
2 entries were displayed.
node7::> vserver show
                                                           Name    Name    NIS             LDAP
Virtual Server   Type     Root Volume  Aggregate           Service Mapping Domain          Client
---------------  -------  -----------  ------------------  ------- ------- --------------- ------
cluster_node78   admin    -            -                   -       -       -               -
node7            node     -            -                   -       -       -               -
node8            node     -            -                   -       -       -               -
vgeorge          cluster  root         node8_george_aggr   file    file    gx.Netappu.com  -
vjames           cluster  root         node7_aggr1         file    file    gx.Netappu.com  -
5 entries were displayed.


node7::> vserver export-policy show
Virtual Server  Policy Name
--------------- -------------------
vgeorge         default
vgeorge         policy1
vjames          default
3 entries were displayed.


node7::> vserver export-policy rule show
This table is currently empty.
node7::> vserver export-policy create -vserver vjames -policyname policy1
node7::> vserver show
                                                           Name    Name    NIS             LDAP
Virtual Server   Type     Root Volume  Aggregate           Service Mapping Domain          Client
---------------  -------  -----------  ------------------  ------- ------- --------------- ------
cluster_node78   admin    -            -                   -       -       -               -
node7            node     -            -                   -       -       -               -
node8            node     -            -                   -       -       -               -
vgeorge          cluster  root         node8_george_aggr   file    file    gx.Netappu.com  -
vjames           cluster  root         node7_aggr1         file    file    gx.Netappu.com  -
5 entries were displayed.


node7::> vserver export-policy rule create ?
    -vserver <vserver name>                                   Virtual Server
   [-policyname] <text>                                       Policy Name
  [ -ruleindex <integer> ]                                    Rule Index
  [ -protocol {any|nfs2|nfs3|nfs|cifs|nfs4|flexcache}, ... ]  Access Protocol
   [-clientmatch] <text>                                      Client Match Spec
   [-rorule] {any|none|never|krb5|ntlm|sys}, ...              RO Access Rule
   [-rwrule] {any|none|never|krb5|ntlm|sys}, ...              RW Access Rule
  [ -anon <text> ]                         User ID To Which Anonymous Users Are Mapped
  [ -superuser {any|none|never|krb5|ntlm|sys}, ... ]          Superuser Security Flavors
  [ -allow-suid [{true|false}] ]   Honor SetUID Bits In SETATTR (default: true)
  [ -allow-dev [{true|false}] ]    Allow Creation of Devices (default: true)
node7::> vserver export-policy rule create -vserver vjames -policyname policy1 -clientmatch 0.0.0.0/0 -rorule any -rwrule any
node7::> vserver export-policy rule show -instance
Virtual Server: vgeorge
Policy Name: policy1
Rule Index: 1
Access Protocol: any
Client Match Spec: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Flavors: never
Honor SetUID Bits In SETATTR: true
Allow Creation of Devices: true

Virtual Server: vjames
Policy Name: policy1
Rule Index: 1
Access Protocol: any
Client Match Spec: 0.0.0.0/0
RO Access Rule: any
RW Access Rule: any
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Flavors: never
Honor SetUID Bits In SETATTR: true
Allow Creation of Devices: true
2 entries were displayed.

node7::> vserver nfs show

This table is currently empty.
node7::> vserver nfs create -vserver vjames
node7::> vserver nfs shwo
ERROR: "shwo" is not a recognized command
node7::> vserver nfs sh
Virtual   General  Default
Server    Access   Windows User  v2       v3       UDP      TCP
--------- -------  ------------  -------  -------  -------  -------
vjames    true     -             enabled  enabled  enabled  enabled

node7::> vserver export-policy rule create -vserver vjames -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any
node7::> volume modify -vserver vjames -vol root -user nobody -group nobody
node7::> volume modify -vserver vjames -vol * -user nobody -group nobody
2 entries were modified.
node7::> vserver cifs show
Virtual     Server           Domain/Workgroup  Authentication
Server      Name             Name              Style
----------- ---------------  ----------------  --------------
vgeorge     10.254.144.28    NAU01             domain

node7::> vserver cifs domain discovered-servers show

Node: node7
Virtual Server: vgeorge
Domain Name        Type     Preference  DC-Name         DC-Address      Status
-----------------  -------  ----------  --------------  --------------  ------
""                 NIS      preferred   216.240.23.30   216.240.23.30   OK
nau01.netappu.com  MS-LDAP  adequate    svldc01         10.254.132.50   OK
nau01.netappu.com  MS-DC    adequate    svldc01         10.254.132.50   OK

Node: node8
Virtual Server: vgeorge
Domain Name        Type     Preference  DC-Name         DC-Address      Status
-----------------  -------  ----------  --------------  --------------  ------
""                 NIS      preferred   216.240.23.30   216.240.23.30   OK
nau01.netappu.com  MS-LDAP  adequate    svldc01         10.254.132.50   OK
nau01.netappu.com  MS-DC    adequate    svldc01         10.254.132.50   OK
6 entries were displayed.

node7::> vserver cifs create -vserver vjames -domain nau01.netappu.com


ERROR: command failed: "cifs-server" is a required field
node7::> vserver cifs create -vserver vjames -domain nau01.netappu.com -cifs-server 10.254.144.27
In order to create an Active Directory machine account for the CIFS server, you
must supply the name and password of a Windows account with sufficient
privileges to add computers to the "CN=Computers" container within the
"nau01.netappu.com" domain.
Enter the user name: Administrator
Enter the password:
node7::> vserver cifs show
Virtual        Server          Domain/Workgroup Authentication
Server         Name            Name             Style
-------------- --------------- ---------------- --------------
vgeorge        10.254.144.28   NAU01            domain
vjames         10.254.144.27   NAU01            domain
2 entries were displayed.
node7::> vserver cifs share show
Virtual Server Share         Path              Properties   Comment  ACL
-------------- ------------- ----------------- ------------ -------- -----------
vgeorge        admin$        /                 browsable    -        -
vgeorge        ipc$          /                 browsable    -        -
vgeorge        root          /                 oplocks      -        Everyone /
                                               browsable             Full
                                               changenotify          Control
vgeorge        rootsnaps     /                 browsable    -        Everyone /
                                               showsnapshot          Full
                                                                     Control
vjames         admin$        /                 browsable    -        -
vjames         ipc$          /                 browsable    -        -
6 entries were displayed.
node7::> vserver cifs share create -vserver vjames -share-name root -path /
node7::> vserver cifs share show
Virtual Server Share         Path              Properties   Comment  ACL
-------------- ------------- ----------------- ------------ -------- -----------
vgeorge        admin$        /                 browsable    -        -
vgeorge        ipc$          /                 browsable    -        -
vgeorge        root          /                 oplocks      -        Everyone /
                                               browsable             Full
                                               changenotify          Control
vgeorge        rootsnaps     /                 browsable    -        Everyone /
                                               showsnapshot          Full
                                                                     Control
vjames         admin$        /                 browsable    -        -
vjames         ipc$          /                 browsable    -        -
vjames         root          /                 oplocks      -        Everyone /
                                               browsable             Full
                                               changenotify          Control
7 entries were displayed.
node7::> vserver cifs share create -vserver vjames -share-name rootsnaps -path / -share-properties browsable,showsnapshot
node7::> vserver cifs share show
Virtual Server Share         Path              Properties   Comment  ACL
-------------- ------------- ----------------- ------------ -------- -----------
vgeorge        admin$        /                 browsable    -        -
vgeorge        ipc$          /                 browsable    -        -
vgeorge        root          /                 oplocks      -        Everyone /
                                               browsable             Full
                                               changenotify          Control
vgeorge        rootsnaps     /                 browsable    -        Everyone /
                                               showsnapshot          Full
                                                                     Control
vjames         admin$        /                 browsable    -        -
vjames         ipc$          /                 browsable    -        -
vjames         root          /                 oplocks      -        Everyone /
                                               browsable             Full
                                               changenotify          Control
vjames         rootsnaps     /                 browsable    -        Everyone /
                                               showsnapshot          Full
                                                                     Control
8 entries were displayed.
node7::> vol show -vserver vjames vol root
(volume show)
ERROR: 'root' was not expected. Please specify -fieldname first.
node7::> vol show -vserver vjames -vol root
(volume show)
ERROR: Ambiguous argument. Possible matches include:
-volume
-volume-style
-volume-ownership
node7::> vol show -vserver vjames -volume root
  (volume show)
Virtual Server Name: vjames
Volume Name: root
Aggregate Name: node7_aggr1
Volume Size: 20MB
Volume Data Set ID: 1027
Volume Master Data Set ID: 2147484675
Volume State: online
Volume Type: RW
Volume Style: flex
Volume Ownership: cluster
Export Policy: default
User ID: nobody
Group ID: nobody
Security Style: unix
Unix Permissions: ---rwx------
Junction Path: /
Junction Path Source: -
Junction Active: -
Parent Volume: -
Comment: -
Available Size: 12.00MB
Total Size: 16MB
Used Size: 4.00MB
Used Percentage: 40%
Total Files (for user-visible data): 566
Files Used (for user-visible data): 334
Space Guarantee Style: volume
Space Guarantee In Effect: true
Percent of Space Reserved for Snapshots: 20%
Used Percent of Snapshot Reserve: 6%
Snapshot Policy: default
Creation Time: Wed Dec 01 18:44:19 2010
Anti-Virus On-Access Policy: default
Inconsistency in the file system: false
node7::> vservser name-mapping show
ERROR: "vservser" is not a recognized command
node7::> vserver name-mapping show
Virtual Server Direction Position
-------------- --------- --------
vgeorge        win-unix  1
    Pattern: NAU01\\Administrator
    Replacement: nobody
vgeorge        unix-win  1
    Pattern: nobody
    Replacement: NAU01\\Administrator
2 entries were displayed.
node7::> vserver name-mapping create -vserver vjames -direction win-unix -position 1 -pattern NAU01\\Administrator -replacment nobody
ERROR: invalid argument "-replacment"
node7::> vserver name-mapping create -vserver vjames -direction win-unix -position 1 -pattern NAU01\\Administrator -replacement nobody
node7::> vserver name-mapping create -vserver vjames -direction unix-win -position 1 -patern nobody -replacement NAU01\\Administrator
ERROR: invalid argument "-patern"
node7::> vserver name-mapping create -vserver vjames -direction unix-win -position 1 -pattern nobody -replacement NAU01\\Administrator
node7::> vserver name-mapping show
Virtual Server Direction Position
-------------- --------- --------
vgeorge        win-unix  1
    Pattern: NAU01\\Administrator
    Replacement: nobody
vgeorge        unix-win  1
    Pattern: nobody
    Replacement: NAU01\\Administrator
vjames         win-unix  1
    Pattern: NAU01\\Administrator
    Replacement: nobody
vjames         unix-win  1
    Pattern: nobody
    Replacement: NAU01\\Administrator
4 entries were displayed.
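Each name-mapping rule above pairs a pattern with a replacement, applied in position order for a given direction. A Python sketch of how such a win-unix rule could be evaluated (the rule data mirrors the output above; the matching logic is an illustrative assumption, not NetApp code):

```python
import re

# Rules copied from the name-mapping show output above (vjames, win-unix).
rules = [
    {"direction": "win-unix", "position": 1,
     "pattern": r"NAU01\\Administrator", "replacement": "nobody"},
]

def map_name(name: str, direction: str) -> str:
    """Apply the first matching rule (by position) for the given direction."""
    for rule in sorted(rules, key=lambda r: r["position"]):
        if rule["direction"] == direction and re.fullmatch(rule["pattern"], name):
            return rule["replacement"]
    return name  # no rule matched; name falls through unchanged

print(map_name(r"NAU01\Administrator", "win-unix"))  # nobody
```

Note the doubled backslash in the pattern: it escapes the literal backslash between domain and user name.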


login: admin
Password:
node7::>
node7::>
node7::> volume create -vserver vjames -volume root_ls1 -aggr node7_aggr1 -type DP
[Job 23] Job succeeded: Successful
node7::> vol status
ERROR: "status" is not a recognized command
node7::> vol
node7::volume> ?
copy       Make a copy of a volume
create     Create a new volume
delete     Delete an existing volume
member>    Manage constituent volumes of striped volumes
modify     Modify volume attributes
mount      Mount a volume on another volume with a junction-path
move       Move a volume from one aggregate to another aggregate
qtree>     Manage qtrees
quota>     Manage Quotas, Policies, Rules and Reports
rename     Rename an existing volume
show       Display a list of volumes
snapshot>  Manage snapshots
unmount    Unmount a volume
node7::volume> show
Virtual
Server    Volume       Aggregate         State      Type       Size  Available Used%
--------- ------------ ----------------- ---------- ---- ---------- ---------- -----
node7     vol0         aggr0             online     RW      53.87GB    41.88GB   22%
node8     vol0         aggr0_node8       online     RW      53.87GB    41.88GB   22%
vgeorge   root         node8_george_aggr online     RW         20MB     4.07MB   79%
vgeorge   root_dp1     node8_george_aggr online     DP         20MB     4.07MB   79%
vgeorge   root_dp2     node8_george_aggr online     DP         20MB    15.91MB   20%
vgeorge   root_ls1     node8_george_aggr online     LS         20MB     4.07MB   79%
vgeorge   root_ls2     node7_aggr1       online     LS         20MB     4.07MB   79%
vgeorge   volume1      node8_george_aggr online     RW         20MB    15.90MB   20%
vjames    root         node7_aggr1       online     RW         20MB     7.46MB   62%
vjames    root_ls1     node7_aggr1       online     DP         20MB    15.91MB   20%
vjames    volume1      node7_aggr1       online     RW         20MB    15.90MB   20%
11 entries were displayed.
node7::volume> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
node7                 true    true
node8                 true    true
2 entries were displayed.

node7::volume> cluster
node7::cluster> ?
ha>          Manage high-availability configuration
identity>    Manage the cluster's attributes, including name and serial number
modify       Modify cluster node membership attributes
show         Display cluster node members
statistics>  Display cluster statistics
node7::cluster> identity
node7::cluster identity> ?
modify       Modify the cluster's attributes
show         Display the cluster's attributes including name, location, contact and license
node7::cluster identity> show
Cluster Name     Serial Number Location      Contact
---------------- ------------- ------------- ------------
cluster_node78   1-80-123456                 NetappU
node7::cluster identity> snapmirror create -source-cluster cluster_node78 -source-vserver vjames -source-volume root -destination-cluster cluster_node78 -destination-vserver vjames -destination-volume root_ls1 -type LS
[Job 27] Job is queued: snapmirror create the relationship with destination clus
[Job 27] Job succeeded: SnapMirror: done
node7::cluster identity> volume create -vserver vjames root_ls2 -aggr node8_aggr1 -type DP
ERROR: command failed: Aggregate node8_aggr1 not found. Reason: entry doesn't exist
node7::cluster identity> volume create -vserver vjames root_ls2 -aggr node8_george_aggr -type DP
[Job 29] Job succeeded: Successful
node7::cluster identity> snapmirror create -source-path cluster_node78://vjames/root -destination-
-destination-path     -destination-cluster  -destination-vserver  -destination-volume
node7::cluster identity> snapmirror create -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_ls2 -type LS
[Job 30] Job is queued: snapmirror create the relationship with destination clus
[Job 30] Job succeeded: SnapMirror: done
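The -source-path and -destination-path arguments used above take the form cluster://vserver/volume (the command help shows the general form [cluster:][//vserver/]volume). A sketch parser for the fully qualified form used throughout this session (illustrative only, not NetApp code):

```python
# Split a fully qualified SnapMirror path of the form
# cluster://vserver/volume into its three components.
def parse_snapmirror_path(path: str) -> dict:
    cluster, _, rest = path.partition("://")
    vserver, _, volume = rest.partition("/")
    return {"cluster": cluster, "vserver": vserver, "volume": volume}

print(parse_snapmirror_path("cluster_node78://vjames/root_ls2"))
# {'cluster': 'cluster_node78', 'vserver': 'vjames', 'volume': 'root_ls2'}
```

The shorter forms allowed by the help text (cluster or vserver omitted) would need extra handling and are not covered by this sketch.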
node7::cluster identity> snapmirror show
Source Path                          Type   Destination Path
------------------------------------ ------ ------------------------------------
cluster_node78://vgeorge/root        DP     cluster_node78://vgeorge/root_dp1
cluster_node78://vgeorge/root        DP     cluster_node78://vgeorge/root_dp2
cluster_node78://vgeorge/root        LS     cluster_node78://vgeorge/root_ls1
cluster_node78://vgeorge/root        LS     cluster_node78://vgeorge/root_ls2
cluster_node78://vjames/root         LS     cluster_node78://vjames/root_ls1
cluster_node78://vjames/root         LS     cluster_node78://vjames/root_ls2
6 entries were displayed.

node7::cluster identity> snapmirror show -instance
Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:31
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484679.2010-12-02_173131
Exported Snapshot: snapmirror.3_2147484679.2010-12-02_173131

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:42
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484680.2010-12-02_173142
Exported Snapshot: snapmirror.3_2147484680.2010-12-02_173142

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:19:08
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_171908
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_171908

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:19:08
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_171908
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_171908

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Uninitialized
Mirror Timestamp: -
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: -
Exported Snapshot: -

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Uninitialized
Mirror Timestamp: -
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: -
Exported Snapshot: -
6 entries were displayed.

node7::cluster identity> snapmirror update-ls-set -source-path cluster_node78://vjames/root
[Job 32] Job is queued: snapmirror update-ls-set for source cluster_node78://vjames/root.
node7::cluster identity> volume create -vserver vjames -volume root_dp1 -aggr node7_aggr1 -type DP
[Job 33] Job succeeded: Successful
node7::cluster identity> volume create -vserver vjames -volume root_dp2 -aggr node8_george_aggr -type DP
[Job 35] Job succeeded: Successful
node7::cluster identity> snapmirror create -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_dp1
[Job 36] Job is queued: snapmirror create the relationship with destination clus
[Job 36] Job succeeded: SnapMirror: done
node7::cluster identity> snapmirror create -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_dp1 -type DP
ERROR: command failed: relationship with destination cluster_node78://vjames/root_dp1 already exists
node7::cluster identity> snapmirror create -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_dp2 -type DP
[Job 38] Job is queued: snapmirror create the relationship with destination clus
[Job 38] Job succeeded: SnapMirror: done
node7::cluster identity> snapmirror ?
abort              Abort an active transfer
create             Create a new snapmirror relationship
delete             Delete a snapmirror relationship
initialize         Start a baseline transfer
initialize-ls-set  Start a baseline load-sharing set transfer
modify             Modify a snapmirror relationship
promote            Promote the destination to read-write
resync             Start a resynchronize operation
show               Display a list of snapmirror relationships
update             Start an incremental transfer
update-ls-set      Start an incremental load-sharing set transfer

node7::cluster identity> snapmirror modify ?
  { [ -source-path|-S <[cluster:][//vserver/]volume> ]   Source Path
    | [ -source-cluster <cluster_name> ]                 Source Cluster
      [ -source-vserver <vserver name> ]                 Source Virtual Server
      [ -source-volume <volume name> ] }                 Source Volume
  { [-destination-path] <[cluster:][//vserver/]volume>   Destination Path
    | -destination-cluster <cluster_name>                Destination Cluster
      [-destination-vserver] <vserver name>              Destination Virtual Server
      [-destination-volume] <volume name> }              Destination Volume
  [[-vserver] <vserver name>]                            Managing Virtual Server
  [ -schedule <text> ]                                   SnapMirror Schedule
  [ -tries {<integer>|unlimited} ]                       Tries Limit
  [ -throttle|-k {<integer>|unlimited} ]                 Throttle (KB/sec)
node7::cluster identity> snapmirror modify -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_dp1 -type DP
ERROR: invalid argument "-type"
node7::cluster identity> snapmirror show
Source Path                          Type   Destination Path
------------------------------------ ------ ------------------------------------
cluster_node78://vgeorge/root        DP     cluster_node78://vgeorge/root_dp1
cluster_node78://vgeorge/root        DP     cluster_node78://vgeorge/root_dp2
cluster_node78://vgeorge/root        LS     cluster_node78://vgeorge/root_ls1
cluster_node78://vgeorge/root        LS     cluster_node78://vgeorge/root_ls2
cluster_node78://vjames/root         DP     cluster_node78://vjames/root_dp1
cluster_node78://vjames/root         DP     cluster_node78://vjames/root_dp2
cluster_node78://vjames/root         LS     cluster_node78://vjames/root_ls1
cluster_node78://vjames/root         LS     cluster_node78://vjames/root_ls2
8 entries were displayed.
node7::cluster identity> snapmirror update -source-path cluster_node78://vjames/root -destination-
-destination-path     -destination-cluster  -destination-vserver  -destination-volume
node7::cluster identity> snapmirror update -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_dp1
[Job 41] Job is queued: snapmirror update of destination cluster_node78://vjames/root_dp1.
node7::cluster identity> vol snapshot show -v
-vserver    -volume
node7::cluster identity> vol snapshot show -vserver vjames -volume root
  (volume snapshot show)
                                                          ---Blocks---
Vserver  Volume  Snapshot                                  Size Total% Used%
-------- ------- ----------------------------------- ---------- ------ -----
vjames   root    daily.2010-12-02_0010                     44KB     0%    1%
                 hourly.2010-12-02_1205                    52KB     0%    1%
                 hourly.2010-12-02_1305                    52KB     0%    1%
                 hourly.2010-12-02_1405                    52KB     0%    1%
                 hourly.2010-12-02_1505                    52KB     0%    1%
                 hourly.2010-12-02_1605                    52KB     0%    1%
                 hourly.2010-12-02_1705                    52KB     0%    1%
                 snapmirror.4_2147484675.2010-12-02_173733
                                                           44KB     0%    1%
                 snapmirror.4_2147484684.2010-12-02_174847
                                                           40KB     0%    0%
9 entries were displayed.
node7::cluster identity> snapmirror show -inst
Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:31
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484679.2010-12-02_173131
Exported Snapshot: snapmirror.3_2147484679.2010-12-02_173131

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:42
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484680.2010-12-02_173142
Exported Snapshot: snapmirror.3_2147484680.2010-12-02_173142

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:50:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_175001
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_175001

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:50:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_175001
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_175001

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:48:47
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484684.2010-12-02_174847
Exported Snapshot: snapmirror.4_2147484684.2010-12-02_174847

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Uninitialized
Mirror Timestamp: -
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: -
Exported Snapshot: -

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:37:33
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_173733
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_173733

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:37:33
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_173733
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_173733
8 entries were displayed.
node7::cluster identity> snapmirror update -source-path cluster_node78://vjames/root -destination-
-destination-path     -destination-cluster  -destination-vserver  -destination-volume
node7::cluster identity> snapmirror update -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_dp2 -foreground true
[Job 44] Job is queued: snapmirror update of destination cluster_node78://vjames
[Job 44] 8.52MB sent for 1 of 10 Snapshot copies, transferring Snapshot copy hou
[Job 44] 8.52MB sent for 3 of 10 Snapshot copies, transferring Snapshot copy hou
[Job 44] 8.52MB sent for 5 of 10 Snapshot copies, transferring Snapshot copy hou
[Job 44] 8.52MB sent for 8 of 10 Snapshot copies, transferring Snapshot copy sna
[Job 44] 8.52MB sent for 9 of 10 Snapshot copies, transferring Snapshot copy sna
[Job 44] Job succeeded: SnapMirror: done
node7::cluster identity> snapmirror show -inst
Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: hourly
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:31
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484679.2010-12-02_173131
Exported Snapshot: snapmirror.3_2147484679.2010-12-02_173131

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:42
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484680.2010-12-02_173142
Exported Snapshot: snapmirror.3_2147484680.2010-12-02_173142

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:55:00
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_175500
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_175500

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:55:00
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_175500
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_175500

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:48:47
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484684.2010-12-02_174847
Exported Snapshot: snapmirror.4_2147484684.2010-12-02_174847

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:54:34
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484685.2010-12-02_175434
Exported Snapshot: snapmirror.4_2147484685.2010-12-02_175434

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:37:33
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_173733
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_173733

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:37:33
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_173733
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_173733
8 entries were displayed.
node7::cluster identity> volume snapshot show -vserver vjames -volume root
                                                          ---Blocks---
Vserver  Volume  Snapshot                                  Size Total% Used%
-------- ------- ----------------------------------- ---------- ------ -----
vjames   root    daily.2010-12-02_0010                     44KB     0%    1%
                 hourly.2010-12-02_1205                    52KB     0%    1%
                 hourly.2010-12-02_1305                    52KB     0%    1%
                 hourly.2010-12-02_1405                    52KB     0%    1%
                 hourly.2010-12-02_1505                    52KB     0%    1%
                 hourly.2010-12-02_1605                    52KB     0%    1%
                 hourly.2010-12-02_1705                    52KB     0%    1%
                 snapmirror.4_2147484675.2010-12-02_173733
                                                           44KB     0%    1%
                 snapmirror.4_2147484684.2010-12-02_174847
                                                           44KB     0%    1%
                 snapmirror.4_2147484685.2010-12-02_175434
                                                           40KB     0%    0%
10 entries were displayed.
node7::cluster identity> volume create -vserver vjames -volume volume3 -aggr node7_aggr1 -junction-path /volume3
[Job 45] Job succeeded: Successful
node7::cluster identity> snapmirror update-ls-set -source-path cluster_node78://vjames/root
[Job 47] Job is queued: snapmirror update-ls-set for source cluster_node78://vjames/root.
node7::cluster identity> Renaming volume temp__1032__49__root_dp2 (fsid 70155f7) to root_dp2: start time 72760488
node7::cluster identity>
node7::cluster identity>

(Login timeout will occur in 60 seconds)

node7::cluster identity>

(Login timeout will occur in 50 seconds)

node7::cluster identity>

(Login timeout will occur in 40 seconds)

node7::cluster identity>

(Login timeout will occur in 30 seconds)

node7::cluster identity>

(Login timeout will occur in 20 seconds)

node7::cluster identity>

(Login timeout will occur in 10 seconds)

node7::cluster identity>
Exiting due to timeout
login:
login: admin
Password:
node7::> snapmirror update-ls-set -source-path cluster_node78://vjames/root -foreground true
[Job 54] Job is queued: snapmirror update-ls-set for source cluster_node78://vja
[Job 54] 0.00B sent for 0 of 6 Snapshot copies, transferring Snapshot copy hourl
[Job 54] 0.00B sent for 1 of 6 Snapshot copies, transferring Snapshot copy hourl
[Job 54] 0.00B sent for 2 of 6 Snapshot copies, transferring Snapshot copy 5min.
[Job 54] 0.00B sent for 4 of 6 Snapshot copies, transferring Snapshot copy 5min.
[Job 54] 4KB sent for 5 of 6 Snapshot copies, transferring Snapshot copy snapmir
[Job 54] Job succeeded: SnapMirror: done
node7::> job schedule show
Name             Type      Description
---------------- --------- -----------------------------------------------------
5min             cron      @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
avUpdateSchedule cron      @2:00
daily            cron      @0:10
hourly           cron      @:05
weekly           cron      Sun@0:15
5 entries were displayed.
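The 5min schedule above is a cron-style description that fires at every five-minute mark within the hour. A short sketch reproducing that description string (illustrative only):

```python
# Expand the "every five minutes" schedule into the @:MM,... description
# format shown by job schedule show above.
five_min_marks = [minute for minute in range(60) if minute % 5 == 0]
description = "@" + ",".join(f":{m:02d}" for m in five_min_marks)
print(description)
# @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
```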
node7::> snapmirror modify -destination-path cluster_node78://vjames/root_ls1 -schedule 5min
[Job 55] Job is queued: snapmirror modify the relationship with destination clus
[Job 55] Job succeeded: SnapMirror: done
node7::> snapmirror show -destination-
-destination-path     -destination-cluster  -destination-vserver  -destination-volume
node7::> snapmirror show -destination-path cluster_node78://v
cluster_node78://vgeorge/<volume>  cluster_node78://vjames/<volume>
node7::> snapmirror show -destination-path cluster_node78://vjames/root_ls* -instance
Source Path:
Source Cluster:
Source Virtual Server:
Source Volume:
Destination Path:
Destination Cluster:
Destination Virtual Server:
Destination Volume:
Snapmirror Relationship Type:

cluster_node78://vjames/root
cluster_node78
vjames
root
cluster_node78://vjames/root_ls1
cluster_node78
vjames
root_ls1
LS

Managing Virtual Server:


SnapMirror Schedule:
Tries Limit:
Throttle (KB/sec):
Mirror State:
Mirror Timestamp:
Mirror Status:
Transfer Snapshot:
Snapshot Progress:
Snapshot Checkpoint:
Newest Common Snapshot:
Exported Snapshot:

vjames
5min
8
unlimited
Snapmirrored
12/02 19:35:01
Idle
0.00B
0.00B
snapmirror.4_2147484675.2010-12-02_193501
snapmirror.4_2147484675.2010-12-02_193501

Source Path:
Source Cluster:
Source Virtual Server:
Source Volume:
Destination Path:
Destination Cluster:
Destination Virtual Server:
Destination Volume:
Snapmirror Relationship Type:
Managing Virtual Server:
SnapMirror Schedule:
Tries Limit:
Throttle (KB/sec):
Mirror State:
Mirror Timestamp:
Mirror Status:
Transfer Snapshot:
Snapshot Progress:
Snapshot Checkpoint:
Newest Common Snapshot:
Exported Snapshot:
2 entries were displayed.

cluster_node78://vjames/root
cluster_node78
vjames
root
cluster_node78://vjames/root_ls2
cluster_node78
vjames
root_ls2
LS
vjames
5min
8
unlimited
Snapmirrored
12/02 19:35:01
Idle
0.00B
0.00B
snapmirror.4_2147484675.2010-12-02_193501
snapmirror.4_2147484675.2010-12-02_193501
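Instance output like the above is a series of `field: value` lines with a blank line between entries, which makes it easy to post-process. A hedged Python sketch (an illustration, not a NetApp tool) that turns such text into one dictionary per entry:

```python
def parse_instance(text):
    """Parse 'field: value' instance output into a list of dicts,
    splitting entries on blank lines. Illustrative parser only."""
    entries, cur = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if cur:
                entries.append(cur)
                cur = {}
            continue
        if ":" in line:
            key, _, val = line.partition(":")  # split on the first colon only
            cur[key.strip()] = val.strip()
    if cur:
        entries.append(cur)
    return entries

sample = """Source Path: cluster_node78://vjames/root
Snapmirror Relationship Type: LS

Source Path: cluster_node78://vjames/root
Snapmirror Relationship Type: LS"""
print(len(parse_instance(sample)))  # 2
```

Splitting on the first colon only matters because values such as `Mirror Timestamp: 12/02 19:35:01` themselves contain colons.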

node7::> snapmirror modify -destination-path cluster_node78://vjames/root_dp1 -schedule hourly
[Job 56] Job is queued: snapmirror modify the relationship with destination clus
[Job 56] Job succeeded: SnapMirror: done
node7::> snapmirror show -instance

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: hourly
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:05:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484679.2010-12-02_190501
Exported Snapshot: snapmirror.3_2147484679.2010-12-02_190501

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:42
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484680.2010-12-02_173142
Exported Snapshot: snapmirror.3_2147484680.2010-12-02_173142

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:35:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_193501
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_193501

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: hourly
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:48:47
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484684.2010-12-02_174847
Exported Snapshot: snapmirror.4_2147484684.2010-12-02_174847

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:54:34
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484685.2010-12-02_175434
Exported Snapshot: snapmirror.4_2147484685.2010-12-02_175434

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:35:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_193501
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_193501

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:35:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_193501
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_193501
7 entries were displayed.
node7::> system date show
Node      Date                Timezone
--------- ------------------- -------------------------
node7     12/2/2010 19:37:48  GMT
node8     12/2/2010 19:37:44  GMT
2 entries were displayed.
node7::> snapmirror show -instance

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: hourly
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:05:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484679.2010-12-02_190501
Exported Snapshot: snapmirror.3_2147484679.2010-12-02_190501

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vgeorge
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:31:42
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484680.2010-12-02_173142
Exported Snapshot: snapmirror.3_2147484680.2010-12-02_173142

Source Path: cluster_node78://vgeorge/root
Source Cluster: cluster_node78
Source Virtual Server: vgeorge
Source Volume: root
Destination Path: cluster_node78://vgeorge/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vgeorge
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vgeorge
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:35:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.3_2147484673.2010-12-02_193501
Exported Snapshot: snapmirror.3_2147484673.2010-12-02_193501

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp1
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: hourly
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:48:47
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484684.2010-12-02_174847
Exported Snapshot: snapmirror.4_2147484684.2010-12-02_174847

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_dp2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_dp2
Snapmirror Relationship Type: DP
Managing Virtual Server: vjames
SnapMirror Schedule: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 17:54:34
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484685.2010-12-02_175434
Exported Snapshot: snapmirror.4_2147484685.2010-12-02_175434

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls1
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls1
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:35:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_193501
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_193501

Source Path: cluster_node78://vjames/root
Source Cluster: cluster_node78
Source Virtual Server: vjames
Source Volume: root
Destination Path: cluster_node78://vjames/root_ls2
Destination Cluster: cluster_node78
Destination Virtual Server: vjames
Destination Volume: root_ls2
Snapmirror Relationship Type: LS
Managing Virtual Server: vjames
SnapMirror Schedule: 5min
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Mirror Timestamp: 12/02 19:35:01
Mirror Status: Idle
Transfer Snapshot: -
Snapshot Progress: 0.00B
Snapshot Checkpoint: 0.00B
Newest Common Snapshot: snapmirror.4_2147484675.2010-12-02_193501
Exported Snapshot: snapmirror.4_2147484675.2010-12-02_193501
7 entries were displayed.


node7::> vol show -vol root*
(volume show)
ERROR: Ambiguous argument. Possible matches include:
-volume
-volume-style
-volume-ownership
node7::> vol show -volume root*
  (volume show)
Virtual
Server    Volume       Aggregate          State   Type  Size  Available  Used%
--------- ------------ ------------------ ------- ----  ----  ---------  -----
vgeorge   root         node8_george_aggr  online  RW    20MB  4.06MB     79%
vgeorge   root_dp1     node8_george_aggr  online  DP    20MB  4.05MB     79%
vgeorge   root_dp2     node7_aggr1        online  DP    20MB  4.07MB     79%
vgeorge   root_ls2     node7_aggr1        online  LS    20MB  4.05MB     79%
vjames    root         node7_aggr1        online  RW    20MB  7.46MB     62%
vjames    root_dp1     node7_aggr1        online  DP    20MB  7.46MB     62%
vjames    root_dp2     node8_george_aggr  online  DP    20MB  7.46MB     62%
vjames    root_ls1     node7_aggr1        online  LS    20MB  7.46MB     62%
vjames    root_ls2     node8_george_aggr  online  LS    20MB  7.45MB     62%
9 entries were displayed.

node7::> system node run -node node7 license add EOYPBIL


A snaprestore site license has been installed.
snaprestore enabled.
node7::> snapmirror promote -source-path cluster_node78://v
cluster_node78://vgeorge/<volume> cluster_node78://vjames/<volume>
node7::> snapmirror promote -source-path cluster_node78://vjames/root -destination-
  -destination-path    -destination-cluster
  -destination-vserver -destination-volume
node7::> snapmirror promote -source-path cluster_node78://vjames/root -destination-path cluster_node78://v
cluster_node78://vgeorge/<volume> cluster_node78://vjames/<volume>
node7::> snapmirror promote -source-path cluster_node78://vjames/root -destination-path cluster_node78://vjames/root_ls1
Warning: promote will delete the read-write volume cluster_node78://vjames/root
and replace it with cluster_node78://vjames/root_ls1
Do you want to continue? {y|n}: y
[Job 57] Job is queued: snapmirror promote of destination cluster_node78://vjame
[Job 57] Job succeeded: SnapMirror: done
node7::> vol show -volume root*
  (volume show)
Virtual
Server    Volume       Aggregate          State   Type  Size  Available  Used%
--------- ------------ ------------------ ------- ----  ----  ---------  -----
vgeorge   root         node8_george_aggr  online  RW    20MB  4.06MB     79%
vgeorge   root_dp1     node8_george_aggr  online  DP    20MB  4.05MB     79%
vgeorge   root_dp2     node7_aggr1        online  DP    20MB  4.07MB     79%
vgeorge   root_ls2     node7_aggr1        online  LS    20MB  4.05MB     79%
vjames    root_dp1     node7_aggr1        online  DP    20MB  7.46MB     62%
vjames    root_dp2     node8_george_aggr  online  DP    20MB  7.46MB     62%
vjames    root_ls1     node7_aggr1        online  RW    20MB  7.46MB     62%
vjames    root_ls2     node8_george_aggr  online  LS    20MB  7.45MB     62%
8 entries were displayed.
node7::> snapmirror show
Source Path                          Type   Destination Path
------------------------------------ ------ ------------------------------------
cluster_node78://vgeorge/root        DP     cluster_node78://vgeorge/root_dp1
                                     DP     cluster_node78://vgeorge/root_dp2
                                     LS     cluster_node78://vgeorge/root_ls2
cluster_node78://vjames/root_ls1     DP     cluster_node78://vjames/root_dp1
                                     DP     cluster_node78://vjames/root_dp2
                                     LS     cluster_node78://vjames/root_ls2
6 entries were displayed.
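The effect of `snapmirror promote` is visible in the summary above: the old read-write volume is gone and the promoted LS mirror now acts as the source of the remaining relationships. A hedged Python model of that observed behaviour (the volume names are the ones from this session; this is not an ONTAP API):

```python
# Hypothetical model of the relationships in this session, as
# (source, type, destination) tuples. Illustrative only.
rels_before = [
    ("vjames/root", "DP", "vjames/root_dp1"),
    ("vjames/root", "DP", "vjames/root_dp2"),
    ("vjames/root", "LS", "vjames/root_ls1"),
    ("vjames/root", "LS", "vjames/root_ls2"),
]

def promote(rels, old_src, new_src):
    """Sketch of the observed promote behaviour: the promoted mirror
    leaves the relationship set, and the surviving relationships are
    repointed from the old source to it."""
    out = []
    for src, typ, dst in rels:
        if dst == new_src:
            continue  # the promoted mirror is no longer a destination
        out.append((new_src if src == old_src else src, typ, dst))
    return out

rels_after = promote(rels_before, "vjames/root", "vjames/root_ls1")
print(rels_after)  # root_ls1 is now the source, matching the output above
```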
node7::> snapmirror update-ls-set -source-path cluster clu
ERROR: the value "clu" is invalid for type <true|false>
node7::> snapmirror update-ls-set -source-path cluster_node78://v
cluster_node78://vgeorge/<volume> cluster_node78://vjames/<volume>
node7::> snapmirror update-ls-set -source-path cluster_node78://vjames/root_ls1 -foreground true
[Job 58] Job is queued: snapmirror update-ls-set for source cluster_node78://vja
[Job 58] 0.00B sent for 0 of 1 Snapshot copies, transferring Snapshot copy snapm
[Job 58] Job succeeded: SnapMirror: done
node7::> vol snap show -volume volume1
  (volume snapshot show)
                                                            ---Blocks---
Vserver  Volume   Snapshot                            Size  Total%  Used%
-------- -------- ----------------------------------- ----- ------- -----
vgeorge  volume1  daily.2010-12-02_0010               44KB      0%    30%
                  hourly.2010-12-02_1405              52KB      0%    33%
                  hourly.2010-12-02_1505              52KB      0%    33%
                  hourly.2010-12-02_1605              52KB      0%    33%
                  hourly.2010-12-02_1705              52KB      0%    33%
                  hourly.2010-12-02_1805              52KB      0%    33%
                  snapshot1                           52KB      0%    33%
                  hourly.2010-12-02_1905              52KB      0%    33%
                  5min.2010-12-02_1930                52KB      0%    33%
                  5min.2010-12-02_1935                52KB      0%    33%
                  5min.2010-12-02_1940                52KB      0%    33%
                  5min.2010-12-02_1945                40KB      0%    28%
vjames   volume1  daily.2010-12-02_0010               44KB      0%    30%
                  hourly.2010-12-02_1405              52KB      0%    33%
                  hourly.2010-12-02_1505              52KB      0%    33%
                  hourly.2010-12-02_1605              52KB      0%    33%
                  hourly.2010-12-02_1705              52KB      0%    33%
                  hourly.2010-12-02_1805              52KB      0%    33%
                  hourly.2010-12-02_1905              52KB      0%    33%
                  5min.2010-12-02_1935                52KB      0%    33%
                  5min.2010-12-02_1940                52KB      0%    33%
                  5min.2010-12-02_1945                52KB      0%    33%
22 entries were displayed.

node7::> volume snapshot create -vserver vjames -volume volume1 -snapshot shapshot1
[Job 59] Job succeeded: Successful
node7::> vol snap show -volume volume1
  (volume snapshot show)
                                                            ---Blocks---
Vserver  Volume   Snapshot                            Size  Total%  Used%
-------- -------- ----------------------------------- ----- ------- -----
vgeorge  volume1  daily.2010-12-02_0010               44KB      0%    30%
                  hourly.2010-12-02_1405              52KB      0%    33%
                  hourly.2010-12-02_1505              52KB      0%    33%
                  hourly.2010-12-02_1605              52KB      0%    33%
                  hourly.2010-12-02_1705              52KB      0%    33%
                  hourly.2010-12-02_1805              52KB      0%    33%
                  snapshot1                           52KB      0%    33%
                  hourly.2010-12-02_1905              52KB      0%    33%
                  5min.2010-12-02_1935                52KB      0%    33%
                  5min.2010-12-02_1940                52KB      0%    33%
                  5min.2010-12-02_1945                52KB      0%    33%
vjames   volume1  daily.2010-12-02_0010               44KB      0%    30%
                  hourly.2010-12-02_1405              52KB      0%    33%
                  hourly.2010-12-02_1505              52KB      0%    33%
                  hourly.2010-12-02_1605              52KB      0%    33%
                  hourly.2010-12-02_1705              52KB      0%    33%
                  hourly.2010-12-02_1805              52KB      0%    33%
Press <space> to page down, <return> for next line, or 'q' to quit...
17 entries were displayed.
node7::> volume snapshot policy show
                  Number Of Is
Name              Schedules Enabled Comment
----------------- --------- ------- ------------------------------------------
default           4         true    Default policy with hourly, daily & weekly
                                    schedules.
    Schedule: hourly   Count: 6
              daily           2
              weekly          2
              5min            3
none              0         false   Policy for no automatic snapshots.
    Schedule: -        Count: -
2 entries were displayed.

node7::> volume snapshot policy add-schedule -policy default -schedule 5min -count 3
ERROR: command failed: Schedule already exists in snapshot policy.
node7::> volume snapshot policy add-schedule -policy default -schedule 5min -count 3
ERROR: command failed: Schedule already exists in snapshot policy.
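The second attempt fails identically because a snapshot policy holds at most one entry per schedule name; adding an existing schedule is rejected rather than updated. A minimal sketch of that uniqueness check (illustrative only, not NetApp code):

```python
def add_schedule(policy, schedule, count):
    """Reject duplicates, mirroring the error seen above.
    'policy' is a plain dict of schedule name -> snapshot count."""
    if schedule in policy:
        raise ValueError("Schedule already exists in snapshot policy.")
    policy[schedule] = count

default = {"hourly": 6, "daily": 2, "weekly": 2, "5min": 3}
try:
    add_schedule(default, "5min", 3)
except ValueError as e:
    print("ERROR: command failed:", e)
```

(To change the count of an existing schedule, a modify-style operation would be used instead of add.)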


node7::> volume snapshot policy show
                  Number Of Is
Name              Schedules Enabled Comment
----------------- --------- ------- ------------------------------------------
default           4         true    Default policy with hourly, daily & weekly
                                    schedules.
    Schedule: hourly   Count: 6
              daily           2
              weekly          2
              5min            3
none              0         false   Policy for no automatic snapshots.
    Schedule: -        Count: -
2 entries were displayed.

node7::> volume rename -vserver vjames -volume root_ls1 -newname root
[Job 60] Job is queued: Rename root_ls1 to root. Renaming volume root_ls1 (fsid 4014c1a) to root: start time 78802335
[Job 60] Job succeeded: Successful
node7::> volume snapshot show -volume root
                                                                    ---Blocks---
Vserver  Volume  Snapshot                                    Size   Total% Used%
-------- ------- ------------------------------------------- ------ ------ -----
vgeorge  root    daily.2010-12-02_0010                       76KB       0%    1%
                 hourly.2010-12-02_1305                      80KB       0%    1%
                 hourly.2010-12-02_1405                      80KB       0%    1%
                 hourly.2010-12-02_1505                      80KB       0%    1%
                 hourly.2010-12-02_1605                      80KB       0%    1%
                 hourly.2010-12-02_1705                      80KB       0%    1%
                 snapmirror.3_2147484680.2010-12-02_173142   100KB      0%    1%
                 hourly.2010-12-02_1805                      52KB       0%    0%
                 hourly.2010-12-02_1905                      44KB       0%    0%
                 snapmirror.3_2147484679.2010-12-02_190501   52KB       0%    0%
                 5min.2010-12-02_1940                        52KB       0%    0%
                 5min.2010-12-02_1945                        52KB       0%    0%
                 5min.2010-12-02_1950                        52KB       0%    0%
                 snapmirror.3_2147484673.2010-12-02_195004   52KB       0%    0%
vjames   root    daily.2010-12-02_0010                       76KB       0%    1%
                 hourly.2010-12-02_1405                      80KB       0%    1%
                 hourly.2010-12-02_1505                      80KB       0%    1%
                 hourly.2010-12-02_1605                      80KB       0%    1%
                 hourly.2010-12-02_1705                      80KB       0%    1%
                 snapmirror.4_2147484684.2010-12-02_174847   80KB       0%    1%
                 snapmirror.4_2147484685.2010-12-02_175434   96KB       0%    1%
                 hourly.2010-12-02_1805                      80KB       0%    1%
                 hourly.2010-12-02_1905                      104KB      1%    1%
                 5min.2010-12-02_1940                        92KB       0%    1%
                 5min.2010-12-02_1945                        52KB       0%    1%
                 5min.2010-12-02_1950                        52KB       0%    1%
                 snapmirror.4_2147484675.2010-12-02_195001   52KB       0%    1%
27 entries were displayed.
node7::> set -priv advanced
Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
node7::*> volume snapshot promote -vserver vjames -volume root -snapshot hourly.2010-12-02_1805
WARNING: One or more Snapshot copies with newer data versions exist for this
volume. Promoting this Snapshot copy will delete all later Snapshot
copies.
Do you want to continue? {y|n}: y
WARNING: Quota rules currently enforced on volume root may change during this
operation. If the currently enforced quota rules are different from
those in Snapshot copy hourly.2010-12-02_1805, you may have to resize
or reinitialize quotas on this volume after this operation.
Do you want to continue? {y|n}: y
[Job 61] Job is queued: Promote hourly.2010-12-02_1805 to root.
ERROR: command failed: [Job 61] Job failed: Failed to promote Snapshot copy
'hourly.2010-12-02_1805' because one or more newer Snapshot copies are
currently used as a reference Snapshot copy for data protection
operations: snapmirror.4_2147484675.2010-12-02_195001.
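The failure illustrates the rule behind the error: promoting snapshot S would delete every snapshot newer than S, so the operation must be refused if any of those newer copies is still a SnapMirror reference. A hedged sketch of that guard (illustrative only; the snapshot names are from this session):

```python
def check_promote(snapshots, target, references):
    """Sketch of the guard implied by the error above.
    snapshots: names ordered oldest -> newest.
    references: names that data-protection relationships depend on."""
    idx = snapshots.index(target)
    blockers = [s for s in snapshots[idx + 1:] if s in references]
    if blockers:
        raise RuntimeError(
            "cannot promote %s; newer reference Snapshot copies: %s"
            % (target, ", ".join(blockers)))
    return snapshots[: idx + 1]  # the copies that would survive

snaps = ["hourly.1805", "hourly.1905", "snapmirror.4_2147484675.195001"]
try:
    check_promote(snaps, "hourly.1805", {"snapmirror.4_2147484675.195001"})
except RuntimeError as e:
    print(e)  # mirrors the 'Failed to promote Snapshot copy' error above
```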
node7::*> system license ?
  add       Add a feature license
  delete    Delete a feature license
  show      Display feature licenses
node7::*> system license show
Feature         Cluster SN  Limit License Code    Description
--------------- ----------- ----- --------------- -------------------------
Base            1-80-123456 666   NWUZZJPJYBFDAA  Base License w/cluster size limit (nodes)
CIFS            1-80-123456 666   JAMHCKPJYBFDAA  CIFS License
SnapRestore     1-80-123456 666   VUJWCKPJYBFDAA  SnapRestore License
NFS             1-80-123456 666   TJFAEKPJYBFDAA  NFS License
SnapMirror_DP   1-80-123456 666   FEDPEKPJYBFDAA  SnapMirror Data Protection License
Striped_Volume  1-80-123456 666   DTYSFKPJYBFDAA  Striped Volume License
6 entries were displayed.

node7::*> volume snapshot show -volume root
                                                                    ---Blocks---
Vserver  Volume  Snapshot                                    Size   Total% Used%
-------- ------- ------------------------------------------- ------ ------ -----
vgeorge  root    daily.2010-12-02_0010                       76KB       0%    1%
                 hourly.2010-12-02_1305                      80KB       0%    1%
                 hourly.2010-12-02_1405                      80KB       0%    1%
                 hourly.2010-12-02_1505                      80KB       0%    1%
                 hourly.2010-12-02_1605                      80KB       0%    1%
                 hourly.2010-12-02_1705                      80KB       0%    1%
                 snapmirror.3_2147484680.2010-12-02_173142   100KB      0%    1%
                 hourly.2010-12-02_1805                      52KB       0%    0%
                 hourly.2010-12-02_1905                      44KB       0%    0%
                 snapmirror.3_2147484679.2010-12-02_190501   52KB       0%    0%
                 5min.2010-12-02_1940                        52KB       0%    0%
                 5min.2010-12-02_1945                        52KB       0%    0%
                 5min.2010-12-02_1950                        52KB       0%    0%
                 snapmirror.3_2147484673.2010-12-02_195004   52KB       0%    0%
vjames   root    daily.2010-12-02_0010                       76KB       0%    1%
                 hourly.2010-12-02_1405                      80KB       0%    1%
                 hourly.2010-12-02_1505                      80KB       0%    1%
                 hourly.2010-12-02_1605                      80KB       0%    1%
                 hourly.2010-12-02_1705                      80KB       0%    1%
                 snapmirror.4_2147484684.2010-12-02_174847   80KB       0%    1%
                 snapmirror.4_2147484685.2010-12-02_175434   96KB       0%    1%
                 hourly.2010-12-02_1805                      80KB       0%    1%
                 hourly.2010-12-02_1905                      104KB      1%    1%
                 5min.2010-12-02_1940                        92KB       0%    1%
                 5min.2010-12-02_1945                        52KB       0%    1%
                 5min.2010-12-02_1950                        52KB       0%    1%
                 snapmirror.4_2147484675.2010-12-02_195001   52KB       0%    1%
27 entries were displayed.
node7::*> set admin
node7::> date
node7::system node date> set admin
node7::system node date> ..
node7::system node> ..
node7::system> system services ndmp show
Node                  Enabled  Clear text  User Id   Password
--------------------- -------- ----------- --------- --------
node7                 true     true        root      admin
node8                 true     true        root      admin
2 entries were displayed.

node7::system> system services ndmp modify -node node7 -enable true -user-id ndmpuser -password ndmppass
node7::system> system node hardware tape drive show
This table is currently empty.

node7::system> system node hardware tape library show


This table is currently empty.
node7::system> cluster show
Node                  Health  Eligibility
--------------------- ------- -----------
node7                 true    true
node8                 true    true
2 entries were displayed.

node7::system> set -priv advanced


Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
node7::system*> cluster ring show
Node      UnitName Epoch    DB Epoch DB Trnxs Master
--------- -------- -------- -------- -------- ---------
node7     mgmt     4        4        229      node8
node7     vldb     4        4        142      node8
node7     vifmgr   4        4        47       node8
node8     mgmt     4        4        229      node8
node8     vldb     4        4        142      node8
node8     vifmgr   4        4        47       node8
6 entries were displayed.
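Every replicated-database (RDB) unit above reports the same epoch and a single master (node8). Per the course notes, a master can only be elected while a majority of nodes is in quorum, and in an even-sized cluster one node holds epsilon as a tie-breaker. A toy quorum check (an illustration of the voting idea, not NetApp's election code):

```python
def has_quorum(healthy, total, epsilon_healthy):
    """Toy majority check for an RDB ring: the epsilon holder is
    counted as an extra half-vote to break even splits. Illustrative."""
    votes = healthy + (0.5 if epsilon_healthy else 0.0)
    return votes > total / 2.0

print(has_quorum(1, 2, True))   # True: the surviving node holds epsilon
print(has_quorum(1, 2, False))  # False: even split, no tie-breaker
```

This is why, in a two-node cluster like this one, losing the epsilon node can leave the survivor out of quorum while RDB writes are frozen until a master is re-elected.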

node7::system*> cluster ping-cluster -node node7


Host is node7
Getting information from ngsh
Local = 192.168.150.25 192.168.150.26
Remote = 192.168.150.27 192.168.150.28
Ping status:
4 paths up, 0 paths down at 1500 size
4 paths up, 0 paths down at 4500 size
4 paths up, 0 paths down at 9000 size
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
node7::system*> cluster ping-cluster -node node8
Host is node8
Getting information from ngsh
Local = 192.168.150.27 192.168.150.28
Remote = 192.168.150.25 192.168.150.26
Ping status:
4 paths up, 0 paths down at 1500 size
4 paths up, 0 paths down at 4500 size
4 paths up, 0 paths down at 9000 size
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
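`cluster ping-cluster` probes every local-to-remote cluster-interface pair at 1500-, 4500-, and 9000-byte sizes, so a path that fails only at the larger sizes points to an MTU/jumbo-frame mismatch somewhere on that path. A toy illustration of how such a report is tallied (the port names and MTU table are made up; this does no real probing):

```python
def path_report(pairs, mtu_of, sizes=(1500, 4500, 9000)):
    """Count paths up/down per probe size: a pair 'passes' a size only
    if both endpoints' MTUs are at least that size. Illustrative model."""
    report = {}
    for size in sizes:
        up = sum(1 for a, b in pairs if min(mtu_of[a], mtu_of[b]) >= size)
        report[size] = (up, len(pairs) - up)
    return report

# Hypothetical cluster ports, all configured for jumbo frames:
mtu = {"e0a": 9000, "e0b": 9000, "e0c": 9000, "e0d": 9000}
pairs = [("e0a", "e0c"), ("e0a", "e0d"), ("e0b", "e0c"), ("e0b", "e0d")]
print(path_report(pairs, mtu))  # {1500: (4, 0), 4500: (4, 0), 9000: (4, 0)}
```

Dropping one port to MTU 1500 in this model reproduces the telltale "up at 1500, down at 9000" pattern.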
node7::system*> set admin
node7::system> stor aggr sow

ERROR: "sow" is not a recognized command


node7::system> stor aggr show
  (storage aggregate show)
Aggregate         Size     Available Used% State   #Vols Nodes  RAID Status
----------------- -------- --------- ----- ------- ----- ------ -----------
aggr0             56.76GB  2.59GB    95%   online  1     node7  raid_dp
aggr0_node8       56.76GB  2.59GB    95%   online  1     node8  raid_dp
node7_aggr1       56.76GB  56.64GB   0%    online  6     node7  raid_dp
node8_george_aggr 170.3GB  170.2GB   0%    online  6     node8  raid_dp
4 entries were displayed.
node7::system> stor aggr show -state
  (storage aggregate show)
ERROR: Missing value for -state. If you want to only display certain fields,
please use -fields <list of fields>.
node7::system> stor aggr show -state ?
(storage aggregate show)
online
creating
mounting
quiesced
quiescing
unmounted
unmounting
destroying
partial
frozen
reverted
restricted
inconsistent
iron_restricted
unknown
offline
failed
node7::system> stor aggr show -state !online
(storage aggregate show)
There are no entries matching your query.
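The queries above use the show-command operators: `!value` negates a match (so `-state !online` lists everything not online) and `*` is a wildcard (as in `-volume root*` earlier). A minimal matcher sketch of those two operators (illustrative only, not the real ONTAP query parser):

```python
import fnmatch

def matches(value, query):
    """Minimal sketch of the query operators used above: a leading '!'
    negates, and '*' is a glob-style wildcard. Not the ONTAP parser."""
    if query.startswith("!"):
        return not matches(value, query[1:])
    return fnmatch.fnmatch(value, query)

states = ["online", "online", "online"]
print([s for s in states if matches(s, "!online")])  # [] -> "no entries"
print(matches("root_ls1", "root*"))                  # True
```

With every aggregate and volume online in this cluster, `!online` correctly returns no entries, which is the desired healthy result.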
node7::system> stor disk show
  (storage disk show)
Disk             UsedSize(MB) Shelf Bay State   RAID Type Aggregate         Owner
---------------- ------------ ----- --- ------- --------- ----------------- -----
node7:0a.112     68000        7     0   present dparity   aggr0             node7
node7:0a.113     68000        7     1   present parity    aggr0             node7
node7:0a.114     68000        7     2   present data      aggr0             node7
node7:0a.115     68000        7     3   present dparity   node7_aggr1       node7
node7:0a.116     68000        7     4   present parity    node7_aggr1       node7
node7:0a.117     68000        7     5   present data      node7_aggr1       node7
node7:0a.118     68000        7     6   spare   pending   -                 node7
node7:0a.119     68000        7     7   spare   pending   -                 node7
node7:0a.120     68000        7     8   spare   pending   -                 node7
node7:0a.121     68000        7     9   spare   pending   -                 node7
node7:0a.122     68000        7     10  spare   pending   -                 node7
node7:0a.123     68000        7     11  spare   pending   -                 node7
node7:0a.124     68000        7     12  spare   pending   -                 node7
node7:0a.125     68000        7     13  spare   pending   -                 node7
node7:0c.112     0            7     0   partner -         -                 node8
node7:0c.113     0            7     1   partner -         -                 node8
node7:0c.114     0            7     2   partner -         -                 node8
node7:0c.115     0            7     3   partner -         -                 node8
node7:0c.116     0            7     4   partner -         -                 node8
node7:0c.117     0            7     5   partner -         -                 node8
node7:0c.118     0            7     6   partner -         -                 node8
node7:0c.119     0            7     7   partner -         -                 node8
node7:0c.120     0            7     8   partner -         -                 node8
node7:0c.121     0            7     9   partner -         -                 node8
node7:0c.122     0            7     10  partner -         -                 node8
node7:0c.123     0            7     11  partner -         -                 node8
node7:0c.124     0            7     12  partner -         -                 node8
node7:0c.125     0            7     13  partner -         -                 node8
node8:0a.112     68000        7     0   present dparity   aggr0_node8       node8
node8:0a.113     68000        7     1   present parity    aggr0_node8       node8
node8:0a.114     68000        7     2   present data      aggr0_node8       node8
node8:0a.115     68000        7     3   spare   pending   -                 node8
node8:0a.116     68000        7     4   spare   pending   -                 node8
node8:0a.117     68000        7     5   spare   pending   -                 node8
node8:0a.118     68000        7     6   spare   pending   -                 node8
node8:0a.119     68000        7     7   spare   pending   -                 node8
node8:0a.120     68000        7     8   present dparity   node8_george_aggr node8
node8:0a.121     68000        7     9   present parity    node8_george_aggr node8
node8:0a.122     68000        7     10  present data      node8_george_aggr node8
node8:0a.123     68000        7     11  present data      node8_george_aggr node8
node8:0a.124     68000        7     12  present data      node8_george_aggr node8
node8:0a.125     68000        7     13  spare   pending   -                 node8
node8:0c.112     0            7     0   partner -         -                 node7
node8:0c.113     0            7     1   partner -         -                 node7
node8:0c.114     0            7     2   partner -         -                 node7
node8:0c.115     68000        7     3   partner -         -                 node7
node8:0c.116     68000        7     4   partner -         -                 node7
node8:0c.117     68000        7     5   partner -         -                 node7
node8:0c.118     0            7     6   partner -         -                 node7
node8:0c.119     0            7     7   partner -         -                 node7
node8:0c.120     0            7     8   partner -         -                 node7
node8:0c.121     0            7     9   partner -         -                 node7
node8:0c.122     0            7     10  partner -         -                 node7
node8:0c.123     0            7     11  partner -         -                 node7
node8:0c.124     0            7     12  partner -         -                 node7
node8:0c.125     0            7     13  partner -         -                 node7
56 entries were displayed.

node7::system> stor disk show -state ?


(storage disk show)
partner
broken
zeroing
spare
copy
pending
reconstructing
present
removed
unfail
node7::system> stor disk show -state broken
(storage disk show)
There are no entries matching your query.
node7::system> vol show
  (volume show)
Virtual
Server    Volume       Aggregate          State   Type      Size  Available  Used%
--------- ------------ ------------------ ------- ---- --------- ---------- -----
node7     vol0         aggr0              online  RW     53.87GB    41.88GB   22%
node8     vol0         aggr0_node8        online  RW     53.87GB    41.87GB   22%
vgeorge   root         node8_george_aggr  online  RW        20MB     4.06MB   79%
vgeorge   root_dp1     node8_george_aggr  online  DP        20MB     4.05MB   79%
vgeorge   root_dp2     node7_aggr1        online  DP        20MB     4.07MB   79%
vgeorge   root_ls2     node7_aggr1        online  LS        20MB     4.05MB   79%
vgeorge   volume1      node8_george_aggr  online  RW        20MB    15.90MB   20%
vgeorge   volume3      node8_george_aggr  online  RW        20MB    15.90MB   20%
vjames    root         node7_aggr1        online  RW        20MB     7.46MB   62%
vjames    root_dp1     node7_aggr1        online  DP        20MB     7.46MB   62%
vjames    root_dp2     node8_george_aggr  online  DP        20MB     7.46MB   62%
vjames    root_ls2     node8_george_aggr  online  LS        20MB     7.45MB   62%
vjames    volume1      node7_aggr1        online  RW        20MB    15.90MB   20%
vjames    volume3      node7_aggr1        online  RW        20MB    15.90MB   20%
14 entries were displayed.
node7::system> vol show -state ?
(volume show)
online
restricted
offline

force-online
node7::system> vol show -state !online
(volume show)
There are no entries matching your query.
node7::system> security login show
                                  Authentication                     Acct
UserName              Application Method         Role Name           Locked
--------------------- ----------- -------------- ------------------- ------
admin                 console     password       admin               no
admininistrator       http        password       admin               no
administrator         http        password       admin               no
administrator         ssh         password       admin               no
public                snmp        community      readonly            -
5 entries were displayed.

node7::system> security loign show -username diag
ERROR: "loign" is not a recognized command
node7::system> security login show -username diag
                                  Authentication                     Acct
UserName              Application Method         Role Name           Locked
--------------------- ----------- -------------- ------------------- ------
diag                  console     password       admin               no

node7::system> set diag


Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
node7::system*> system node systemshell node7
Data ONTAP/amd64 (node7) (ttyp0)
login: diage
Password:
Login incorrect
login: diag
Password:
Login incorrect
login:
login:
login: root
Password:
Login incorrect
login:
node7::system*> security login sh
                                  Authentication                      Acct
UserName              Application Method         Role Name            Locked
--------------------- ----------- -------------- -------------------- ------
admin                 console     password       admin                no
admininistrator       http        password       admin                no
administrator         http        password       admin                no
administrator         ssh         password       admin                no
public                snmp        community      readonly             -
5 entries were displayed.

node7::system*> security login sh -username diag

                                  Authentication                      Acct
UserName              Application Method         Role Name            Locked
--------------------- ----------- -------------- -------------------- ------
diag                  console     password       admin                no

node7::system*> security login show -username diag

                                  Authentication                      Acct
UserName              Application Method         Role Name            Locked
--------------------- ----------- -------------- -------------------- ------
diag                  console     password       admin                no

node7::system*> security login password -username diag


Please enter a new password:
Please enter it again:
ERROR: Passwords didn't match.
node7::system*> security login password -username diag
Please enter a new password:
Please enter it again:
ERROR: New password must be at least 8 characters long.
node7::system*> security login password -username diag
Please enter a new password:
Please enter it again:
node7::system*> set -priv diag
node7::system*> systemshell
(system node systemshell)
Usage:
[-node] <nodename>
[[-command] <text>]

Node
*Command to run

node7::system*> systemshell -node local


(system node systemshell)
Data ONTAP/amd64 (node7) (ttyp0)
login: diag
Password:
WARNING: The system shell provides access to low-level
diagnostic tools that can cause irreparable damage to
the system if not used properly. Use this environment
only when directed to do so by support personnel.
node7% rdb_dump
Local time Thu Dec 2 20:14:30 2010
RDB Unit "Management" (id 1) on host "node7" (site 1000)
At Thu Dec 2 20:14:30 2010.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <4,237>

Online Status:
Local 1000 is Secondary (epoch: 4, master: 1001)
1. id 1000, state: online (local)
2. id 1001, state: online *** Master
RDB Unit "VifMgr" (id 2) on host "node7" (site 1000)
At Thu Dec 2 20:14:30 2010.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <4,47>
Online Status:
Local 1000 is Secondary (epoch: 4, master: 1001)
1. id 1000, state: online (local)
2. id 1001, state: online *** Master
RDB Unit "VLDB" (id 0) on host "node7" (site 1000)
At Thu Dec 2 20:14:30 2010.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <4,148>
Online Status:
Local 1000 is Secondary (epoch: 4, master: 1001)
1. id 1000, state: online (local)
2. id 1001, state: online *** Master
------------
node7% rdb_dump -c vldb
Local time Thu Dec 2 20:15:00 2010
RDB Unit "VLDB" (id 0) on host "node7" (site 1000)
At Thu Dec 2 20:15:00 2010.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <4,148>
Online Status:
Local 1000 is Secondary (epoch: 4, master: 1001)
1. id 1000, state: online (local)
2. id 1001, state: online *** Master
------------
Local time Thu Dec 2 20:15:03 2010
RDB Unit "VLDB" (id 0) on host "node7" (site 1000)
At Thu Dec 2 20:15:03 2010.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <4,148>
Online Status:
Local 1000 is Secondary (epoch: 4, master: 1001)
1. id 1000, state: online (local)
2. id 1001, state: online *** Master
------------
[output repeats every 3 seconds; DBSet Version advances <4,148> -> <4,149> -> <4,150> -> <4,152>]
------------
Local time Thu Dec 2 20:15:42 2010
RDB Unit "VLDB" (id 0) on host "node7" (site 1000)
At Thu Dec 2 20:15:42 2010.
App Version: <1,1>, RDB Version: <2,0>, DBSet Version: <4,152>
Online Status:
Local 1000 is Secondary (epoch: 4, master: 1001)
1. id 1000, state: online (local)
2. id 1001, state: online *** Master
------------
^C
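Each `rdb_dump` block carries one quorum status line per RDB unit. As a quick sketch of pulling those out of captured output (a hypothetical helper written for these notes, not NetApp tooling):

```python
import re

# Matches the "Online Status" line rdb_dump prints for each RDB unit,
# e.g. "Local 1000 is Secondary (epoch: 4, master: 1001)".
STATUS_RE = re.compile(
    r"Local (?P<site>\d+) is (?P<role>\w+) \(epoch: (?P<epoch>\d+), master: (?P<master>\d+)\)"
)

def parse_rdb_status(text):
    """Return one (site, role, epoch, master) tuple per status line found."""
    return [
        (int(m["site"]), m["role"], int(m["epoch"]), int(m["master"]))
        for m in STATUS_RE.finditer(text)
    ]

sample = "Local 1000 is Secondary (epoch: 4, master: 1001)"
print(parse_rdb_status(sample))  # [(1000, 'Secondary', 4, 1001)]
```

Handy for spotting, across a long capture like the one above, whether epoch or master ever changed between iterations.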
node7% cd /mroot/etc/log/mlog
node7% ls -ltr
total 14108
-rw-r--r--  2 root  wheel     1807 Nov 28 15:50 spmd.log.previous.3
-rw-r--r--  2 root  wheel     1807 Nov 28 15:50 spmd.log.0000000001
-rw-r--r--  2 root  wheel     2327 Nov 28 15:50 command-history.log.previous.3
-rw-r--r--  2 root  wheel     2327 Nov 28 15:50 command-history.log.0000000001
-rw-r--r--  2 root  wheel     5231 Nov 28 15:50 debug.log.previous.5
-rw-r--r--  2 root  wheel     5231 Nov 28 15:50 debug.log.0000000001
-rw-r--r--  2 root  wheel   105957 Nov 28 16:01 mgwd.log.previous.5
-rw-r--r--  2 root  wheel   105957 Nov 28 16:01 mgwd.log.0000000001
-rw-r--r--  2 root  wheel    57778 Nov 29 00:00 notifyd.log.previous.5
-rw-r--r--  2 root  wheel    57778 Nov 29 00:00 notifyd.log.0000000001
-rw-r--r--  2 root  wheel    34950 Nov 29 00:22 messages.log.previous.5
-rw-r--r--  2 root  wheel    34950 Nov 29 00:22 messages.log.0000000001
-rw-r--r--  2 root  wheel     3899 Nov 29 00:23 debug.log.previous.4
-rw-r--r--  2 root  wheel     3899 Nov 29 00:23 debug.log.0000000002
-rw-r--r--  2 root  wheel      270 Nov 29 00:31 mgwd.log.previous.4
-rw-r--r--  2 root  wheel      270 Nov 29 00:31 mgwd.log.0000000002
-rw-r--r--  2 root  wheel    31963 Nov 30 00:00 notifyd.log.previous.4
-rw-r--r--  2 root  wheel    31963 Nov 30 00:00 notifyd.log.0000000002
-rw-r--r--  2 root  wheel    58600 Nov 30 00:11 messages.log.previous.4
-rw-r--r--  2 root  wheel    58600 Nov 30 00:11 messages.log.0000000002
-rw-r--r--  2 root  wheel      238 Nov 30 22:29 spmd.log.previous.2
-rw-r--r--  2 root  wheel      238 Nov 30 22:29 spmd.log.0000000002
-rw-r--r--  2 root  wheel     1924 Nov 30 22:29 secd.log.previous.2
-rw-r--r--  2 root  wheel     1924 Nov 30 22:29 secd.log.0000000001
-rw-r--r--  2 root  wheel     6991 Nov 30 22:29 debug.log.previous.3
-rw-r--r--  2 root  wheel     6991 Nov 30 22:29 debug.log.0000000003
-rw-r--r--  2 root  wheel    30341 Nov 30 22:31 vldb.log.previous.2
-rw-r--r--  2 root  wheel    30341 Nov 30 22:31 vldb.log.0000000001
-rw-r--r--  2 root  wheel     5982 Nov 30 23:53 ndmpd.log.previous.3
-rw-r--r--  2 root  wheel     5982 Nov 30 23:53 ndmpd.log.0000000001
-rw-r--r--  2 root  wheel    51475 Dec  1 00:00 notifyd.log.previous.3
-rw-r--r--  2 root  wheel    51475 Dec  1 00:00 notifyd.log.0000000003
-rw-r--r--  2 root  wheel    11412 Dec  1 00:05 command-history.log.previous.2
-rw-r--r--  2 root  wheel    11412 Dec  1 00:05 command-history.log.0000000002
-rw-r--r--  2 root  wheel   257734 Dec  1 00:05 mgwd.log.previous.3
-rw-r--r--  2 root  wheel   257734 Dec  1 00:05 mgwd.log.0000000003
-rw-r--r--  2 root  wheel    73312 Dec  1 00:10 messages.log.previous.3
-rw-r--r--  2 root  wheel    73312 Dec  1 00:10 messages.log.0000000003
-rw-r--r--  2 root  wheel   227346 Dec  1 00:10 vifmgr.log.previous.3
-rw-r--r--  2 root  wheel   227346 Dec  1 00:10 vifmgr.log.0000000001
-rw-r--r--  2 root  wheel     3899 Dec  1 00:10 debug.log.previous.2
-rw-r--r--  2 root  wheel     3899 Dec  1 00:10 debug.log.0000000004
-rw-r--r--  2 root  wheel     5956 Dec  1 07:29 mgwd.log.previous.2
-rw-r--r--  2 root  wheel     5956 Dec  1 07:29 mgwd.log.0000000004
-rw-r--r--  2 root  wheel    25493 Dec  1 08:00 notifyd.log.previous.2
-rw-r--r--  2 root  wheel    25493 Dec  1 08:00 notifyd.log.0000000004
-rw-r--r--  2 root  wheel    21754 Dec  1 08:05 messages.log.previous.2
-rw-r--r--  2 root  wheel    21754 Dec  1 08:05 messages.log.0000000004
-rw-r--r--  2 root  wheel   671590 Dec  1 08:06 vifmgr.log.previous.2
-rw-r--r--  2 root  wheel   671590 Dec  1 08:06 vifmgr.log.0000000002
-rw-r--r--  2 root  wheel    25886 Dec  1 08:06 ndmpd.log.previous.2
-rw-r--r--  2 root  wheel    25886 Dec  1 08:06 ndmpd.log.0000000002
-rw-r--r--  1 root  wheel       89 Dec  1 18:40 jm-restart.log.old
drwxr-xr-x  2 root  wheel     4096 Dec  1 22:01 dead_logs
drwxr-xr-x  2 root  wheel     4096 Dec  1 22:01 var_dead_logs
-rw-r--r--  1 root  wheel       89 Dec  1 22:01 jm-restart.log
-rw-r--r--  2 root  wheel     6057 Dec  1 22:01 spmd.log.previous.1
-rw-r--r--  2 root  wheel     6057 Dec  1 22:01 spmd.log.0000000003
-rw-r--r--  2 root  wheel    72138 Dec  1 22:03 vldb.log.previous.1
-rw-r--r--  2 root  wheel    72138 Dec  1 22:03 vldb.log.0000000002
-rw-r--r--  2 root  wheel    43168 Dec  1 23:18 secd.log.previous.1
-rw-r--r--  2 root  wheel    43168 Dec  1 23:18 secd.log.0000000002
-rw-r--r--  2 root  wheel    26647 Dec  1 23:41 debug.log.previous.1
-rw-r--r--  2 root  wheel    26647 Dec  1 23:41 debug.log.0000000005
-rw-r--r--  2 root  wheel    21799 Dec  2 02:20 command-history.log.previous.1
-rw-r--r--  2 root  wheel    21799 Dec  2 02:20 command-history.log.0000000003
-rw-r--r--  2 root  wheel   970585 Dec  2 07:02 mgwd.log.previous.1
-rw-r--r--  2 root  wheel   970585 Dec  2 07:02 mgwd.log.0000000005
-rw-r--r--  2 root  wheel    70840 Dec  2 07:56 ndmpd.log.previous.1
-rw-r--r--  2 root  wheel    70840 Dec  2 07:56 ndmpd.log.0000000003
-rw-r--r--  2 root  wheel   280905 Dec  2 08:00 notifyd.log.previous.1
-rw-r--r--  2 root  wheel   280905 Dec  2 08:00 notifyd.log.0000000005
-rw-r--r--  2 root  wheel  2274834 Dec  2 08:04 vifmgr.log.previous.1
-rw-r--r--  2 root  wheel  2274834 Dec  2 08:04 vifmgr.log.0000000003
-rw-r--r--  2 root  wheel   131149 Dec  2 08:05 messages.log.previous.1
-rw-r--r--  2 root  wheel   131149 Dec  2 08:05 messages.log.0000000005
-rw-r--r--  2 root  wheel        0 Dec  2 08:06 spmd.log.0000000004
-rw-r--r--  2 root  wheel        0 Dec  2 08:06 spmd.log
-rw-r--r--  2 root  wheel        0 Dec  2 08:06 secd.log.0000000003
-rw-r--r--  2 root  wheel        0 Dec  2 08:06 secd.log
-rw-r--r--  2 root  wheel     7645 Dec  2 19:40 debug.log.0000000006
-rw-r--r--  2 root  wheel     7645 Dec  2 19:40 debug.log
-rw-r--r--  2 root  wheel     6420 Dec  2 19:41 vldb.log.0000000003
-rw-r--r--  2 root  wheel     6420 Dec  2 19:41 vldb.log
-rwxr-xr-x  1 root  wheel    11008 Dec  2 19:54 joblog.bin
-rw-r--r--  2 root  wheel    37364 Dec  2 20:04 notifyd.log.0000000006
-rw-r--r--  2 root  wheel    37364 Dec  2 20:04 notifyd.log
-rw-r--r--  2 root  wheel    37642 Dec  2 20:09 ndmpd.log.0000000004
-rw-r--r--  2 root  wheel    37642 Dec  2 20:09 ndmpd.log
-rw-r--r--  2 root  wheel    20112 Dec  2 20:14 command-history.log.0000000004
-rw-r--r--  2 root  wheel    20112 Dec  2 20:14 command-history.log
-rw-r--r--  2 root  wheel    46047 Dec  2 20:15 messages.log.0000000006
-rw-r--r--  2 root  wheel    46047 Dec  2 20:15 messages.log
-rw-r--r--  2 root  wheel  1168960 Dec  2 20:15 vifmgr.log.0000000004
-rw-r--r--  2 root  wheel  1168960 Dec  2 20:15 vifmgr.log
-rw-r--r--  2 root  wheel    84408 Dec  2 20:15 mgwd.log.0000000006
-rw-r--r--  2 root  wheel    84408 Dec  2 20:15 mgwd.log
node7% exit
logout
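The mlog directory keeps each daemon's live log plus rotated copies (`.previous.N` and zero-padded sequence numbers). A small sketch for grouping them by base log name — the naming pattern here is inferred from the listing above, not taken from NetApp documentation:

```python
import re
from collections import defaultdict

# Rotated-name pattern inferred from the listing:
#   <base>.log, <base>.log.previous.N, <base>.log.<10-digit sequence>
ROTATED = re.compile(r"^(?P<base>.+?\.log)(?:\.previous\.\d+|\.\d{10})?$")

def group_logs(names):
    """Group log file names by their base log (skips non-logs like joblog.bin)."""
    groups = defaultdict(list)
    for name in names:
        match = ROTATED.match(name)
        if match:
            groups[match["base"]].append(name)
    return dict(groups)

files = ["mgwd.log", "mgwd.log.0000000006", "mgwd.log.previous.1", "joblog.bin"]
print(group_logs(files))
# {'mgwd.log': ['mgwd.log', 'mgwd.log.0000000006', 'mgwd.log.previous.1']}
```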

node7::system*> stat show -category volume


(system node platform ifswitch stat)
ERROR: "show" is an invalid value for field "-node <nodename>"
node7::system*> stat show -ca
ERROR: "show" is an invalid value for field "-node <nodename>"
node7::system*> stat show -category volume
(system node platform ifswitch stat)
ERROR: "show" is an invalid value for field "-node <nodename>"
node7::system*> stat show ?
(system node platform ifswitch stat)
ERROR: "show" is an invalid value for field "-node <nodename>"
node7::system*> stat show ?
(system node platform ifswitch stat)
ERROR: "show" is an invalid value for field "-node <nodename>"
node7::system*> stat show ?
(system node platform ifswitch stat)
ERROR: "show" is an invalid value for field "-node <nodename>"
node7::system*> stat?
  (system node platform ifswitch stat)
  status>   Display service status
  status    Display kernel coredump status
  stat      *Show Stats
  stats     *Show Serial Interface Statistics
node7::system*> stat ?
  (system node platform ifswitch stat)
  [ -instance | -fields <fieldname>, ... ]
  [[-node] <nodename>]                                *Node
  [[-port] {sw-RJ45|sw-RLM|sw-PartnerSwitch|sw-e0M}]  *Port
  [ -rx-good <integer> ]                              *Rx Good Frames
  [ -rx-bad <integer> ]                               *Rx Bad Frames
  [ -rx-discards <integer> ]                          *Rx Discards
  [ -rx-filtered <integer> ]                          *Rx Filtered
  [ -tx-frames <integer> ]                            *Tx Frames
  [ -tx-collisions <integer> ]                        *Tx Collisions
  [ -status {down|up} ]                               *Link Status
  [ -media <text> ]                                   *Media

node7::system*> stat -node node7 ?
  (system node platform ifswitch stat)
  [ -instance | -fields <fieldname>, ... ]
  [[-port] {sw-RJ45|sw-RLM|sw-PartnerSwitch|sw-e0M}]  *Port
  [ -rx-good <integer> ]                              *Rx Good Frames
  [ -rx-bad <integer> ]                               *Rx Bad Frames
  [ -rx-discards <integer> ]                          *Rx Discards
  [ -rx-filtered <integer> ]                          *Rx Filtered
  [ -tx-frames <integer> ]                            *Tx Frames
  [ -tx-collisions <integer> ]                        *Tx Collisions
  [ -status {down|up} ]                               *Link Status
  [ -media <text> ]                                   *Media
node7::system*> stat -node node7 -inst
(system node platform ifswitch stat)
There are no entries matching your query.
ERROR: Not supported on this platform
node7::system*> stat -node node7 -instance
(system node platform ifswitch stat)
There are no entries matching your query.
ERROR: Not supported on this platform
node7::system*> stat -node node7 ?
  (system node platform ifswitch stat)
  [ -instance | -fields <fieldname>, ... ]
  [[-port] {sw-RJ45|sw-RLM|sw-PartnerSwitch|sw-e0M}]  *Port
  [ -rx-good <integer> ]                              *Rx Good Frames
  [ -rx-bad <integer> ]                               *Rx Bad Frames
  [ -rx-discards <integer> ]                          *Rx Discards
  [ -rx-filtered <integer> ]                          *Rx Filtered
  [ -tx-frames <integer> ]                            *Tx Frames
  [ -tx-collisions <integer> ]                        *Tx Collisions
  [ -status {down|up} ]                               *Link Status
  [ -media <text> ]                                   *Media
node7::system*> stat -node node7 stat ?
(system node platform ifswitch stat)
ERROR: "stat" is an invalid value for field "-port
<sw-RJ45|sw-RLM|sw-PartnerSwitch|sw-e0M>"
node7::system*> stat -node node7
(system node platform ifswitch stat)
There are no entries matching your query.
ERROR: Not supported on this platform
node7::system*> stat show -category volume -object volume1
(system node platform ifswitch stat)
ERROR: "show" is an invalid value for field "-node <nodename>"
node7::system*> diag nblade cifs show-state
------------------------------------------------------

Global Log
------------------------------------------------------
0000 (0000000000 secs): CifsServer Initialized
0001 (0000000000 secs): CifsServer running in release mode
0002 (0000000000 secs): CifsServer Access Enabled
0003 (0000000000 secs): Smb2 Access Disabled
0004 (0000000000 secs): CifsServerState_t = 696896
0005 (0000000000 secs): SmbConnection_t = 1272
0006 (0000000000 secs): NameServerConnection_t = 2032
0007 (0000000000 secs): SmbCommand_t = 1128
0008 (0000000000 secs): VirtualServerTable_t = 275720
0009 (0000000000 secs): OutboundConnTable_t = 2880
0010 (0000000000 secs): VirtualInterfaceTable_t = 6152
0011 (0000000000 secs): ExternalStats_t = 2584
0012 (0000000000 secs): InternalStats_t = 600
0013 (0000000000 secs): SmbFileTable_t = 10485856
0014 (0000000000 secs): SmbFileEntry_t = 160
0015 (0000000000 secs): CallBackCmdContainer_t = 8264
0016 (0000000000 secs): CifsGlobalLog_t = 69648
0017 (0000000000 secs): ConnLog_t = 10260
0018 (0000000188 secs): ExportShareEntry_t = 4200
0019 (0000000197 secs): AddVirtualInterface with Id 1023 for VirServer 0 with Ip Address = 0xc0a8961a as ClusterVif
0020 (0000000201 secs): AddVirtualInterface with Id 1024 for VirServer 0 with Ip Address = 0xc0a89619 as ClusterVif
0021 (0000000316 secs): AddVirtualInterface with Id 1022 for VirServer 0 with Ip Address = 0x0afe9011 as ClusterVif
0022 (0000004669 secs): AddVirtualInterface with Id 1026 for VirServer 4 with Ip Address = 0x0afe901b as DataVif
0023 (0000004669 secs): SetVirtualServerMgmtInfo2 for VirServer 3 with name 10.254.144.28 and domain NAU01
0024 (0000004669 secs): InitializeNbtRegistration for Local VirServer 7.
0025 (0000004669 secs): AddExportShare admin$ for VirServer 3
0026 (0000004669 secs): AddExportShare ipc$ for VirServer 3
0027 (0000004669 secs): Modify VirServer 3 with flags 0x00007000
0028 (0000004669 secs): Modify Server HomeDir Enabled from 0 to 0
0029 (0000004669 secs): Modify Server HomeDir Public from 0 to 0
0030 (0000004669 secs): Modify Server HomeDir Admin Public from 0 to 0
0031 (0000000000 secs): Modify Server HomeDir paths: count(0)
0032 (0000004804 secs): Modify Server HomeDir patterns: count(0)
0033 (0000004935 secs): AddExportShare root for VirServer 3
0034 (0000005104 secs): AddExportShare rootsnaps for VirServer 3
0035 (0000005104 secs): SetVirtualServerMgmtInfo2 for VirServer 4 with name 10.254.144.27 and domain NAU01
0036 (0000005104 secs): InitializeNbtRegistration for Local VirServer 6.
0037 (0000005104 secs): AddExportShare admin$ for VirServer 4
0038 (0000005104 secs): AddExportShare ipc$ for VirServer 4
0039 (0000005104 secs): Modify VirServer 4 with flags 0x00007000
0040 (0000005104 secs): Modify Server HomeDir Enabled from 0 to 0
0041 (0000005104 secs): Modify Server HomeDir Public from 0 to 0
0042 (0000005104 secs): Modify Server HomeDir Admin Public from 0 to 0
0043 (0000005104 secs): Modify Server HomeDir paths: count(0)
0044 (0000005243 secs): Modify Server HomeDir patterns: count(0)
0045 (0000005437 secs): AddExportShare root for VirServer 4
0046 (0000000000 secs): AddExportShare rootsnaps for VirServer 4
------------------------------------------------------

node7::system*> diag nblade cifs server show

There are 2 Virtual Servers:

Virtual Server ID:   4
Server Type:         Kerberos
Name:                10.254.144.27
Domain Name:         NAU01
MTAP:                0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Node Type:           B
WINS Servers:        Count = 0
Machine SID:         S-1-000000000005-21-3842457870-236724463-2208299759
Local SID:           S-1-000000000005-21-3842457870-236724463-2208299759-1656
Domain Controllers:  0
IP Addresses:        Count = 1
  IP (0): 10.254.144.27

Virtual Server ID:   3
Server Type:         Kerberos
Name:                10.254.144.28
Domain Name:         NAU01
MTAP:                0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Node Type:           B
WINS Servers:        Count = 0
Machine SID:         S-1-000000000005-21-3842457870-236724463-2208299759
Local SID:           S-1-000000000005-21-3842457870-236724463-2208299759-1653
Domain Controllers:  0
IP Addresses:        Count = 1
  IP (0): 10.254.144.28

node7::system*> diag secd connections show


Usage:
    [-vserver] <vserver>    *Virtual server
    [ -type <text> ]        *Cache type (lsa,netlogon,ldap-ad,ldap-nis-namemap,nis)
    [ -key <text> ]         *Connection key
node7::system*> diag secd connections show -vserver vjames
[Cache: NetLogon/nau01.netappu.com - Hits: 0, misses: 0, average retrieval: 31.50ms]
 + Rank: 01 - Server: 10.254.132.50 (svldc01.nau01.netappu.com) - Created 1257.4 mins ago
     Used 1 time(s), and has been available for 75445 secs
     RTT in ms: mean=0.00, min=0, max=0, med=0, dev=0.00 (0.0 mins of data)
[Cache: LSA/nau01.netappu.com - Hits: 0, misses: 0, average retrieval: 15.00ms]
 + Rank: 01 - Server: 10.254.132.50 (svldc01.nau01.netappu.com) - Created 1257.4 mins ago
     Used 1 time(s), and has been available for 75445 secs
     RTT in ms: mean=0.00, min=0, max=0, med=0, dev=0.00 (0.0 mins of data)
[Cache: LDAP (Active Directory)/nau01.netappu.com - Hits: 0, misses: 6, average retrieval: 9.78ms]
 + Rank: 01 - Server: 10.254.132.50 (svldc01.nau01.netappu.com) - Created 59.7 mins ago
     Used 1 time(s), and has been available for 3582 secs
     RTT in ms: mean=0.00, min=0, max=0, med=0, dev=0.00 (0.0 mins of data)

node7::system*> system reboot -node node7 -dump true


login: panic: sysctl debug.panic
version: NetApp Release 8.0RC1: Thu Aug 13 15:59:02 PDT 2009
cpuid = 1
KDB: stack backtrace:
panic() at panic+0x3cb
panic() at panic+0x460
sysctl_wire_old_buffer() at sysctl_wire_old_buffer+0x131
userland_sysctl() at userland_sysctl+0x117
__sysctl() at __sysctl+0xd3
syscall() at syscall+0x791
Xfast_syscall() at Xfast_syscall+0xaa
--- syscall (202, FreeBSD ELF64, __sysctl), rip = 0x8065e154c, rsp = 0x7ffff79acc68, rbp = 0x7ffff79acd30 ---
Uptime: 22h25m35s
PANIC: sysctl debug.panic in process mgwd on release NetApp Release 8.0RC1 (C) on Thu Dec 2 20:22:17 GMT 2010
version: NetApp Release 8.0RC1: Thu Aug 13 15:59:02 PDT 2009
compile flags: amd64, x86_64
DUMPCORE: START
Dumping to disks: 0a.125
................................................................................
..............................................................
DUMPCORE: END -- coredump written.
System rebooting...
cpu_reset called on cpu#1
Phoenix TrustedCore(tm) Server
Copyright 1985-2004 Phoenix Technologies Ltd.
All Rights Reserved
BIOS version: 2.3.0
Portions Copyright (c) 2006-2009 NetApp All Rights Reserved
CPU= Dual Core AMD Opteron(tm) Processor 265 X 2
Testing RAM
512MB RAM tested
8192MB RAM installed
Fixed Disk 0: STEC
NACF1GM1U-B11

Boot Loader version 1.6.1


Copyright (C) 2000-2003 Broadcom Corporation.
Portions Copyright (C) 2002-2009 NetApp

CPU Type: Dual Core AMD Opteron(tm) Processor 265


Starting AUTOBOOT press Ctrl-C to abort...
Loading x86_64/freebsd/image2/kernel:....0x100000/3277608 0x520340/3198128 0x82cff0/562512 Entry at 0x801445e0
Loading x86_64/freebsd/image2/platform.ko:0x8b7000/147808 0x921d70/156600 0x8db160/456 0x948128/1200 0x8db328/616 0x9485d8/1848 0x8db590/15629 0x8df2a0/20870 0x8e4428/80 0x948d10/240 0x8e4478/576 0x948e00/1728 0x8e46b8/304 0x9494c0/912 0x8e47e8/48 0x949850/144 0x8e4820/48000 0x9498e0/56712 0x8f03a0/425 0x90ae70/3090 0x921c81/237 0x90ba88/47400 0x9173b0/43217
Starting program at 0x801445e0
NetApp Data ONTAP Release 8.0RC1 Cluster-Mode
Copyright (C) 1992-2009 NetApp.
All rights reserved.
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
arp_rtrequest: bad gateway 127.0.20.1 (!AF_LINK)
BSD initialization for BSD <-> Ontap communication Done!
add host 127.0.20.1: gateway 127.0.20.1
add host 127.0.10.1: gateway 127.0.20.1
Reservation conflict found on this node's disks!
Local System ID: 118060269
Press Ctrl-C for Maintenance menu to release disks.
Disk reservations have been released
7 mode networking configuration change is disallowed while in 10 mode.
Waiting for giveback...(Press Ctrl-C to abort wait)Continuing boot...
Doesn't use '/etc/syslog.conf', no syslogd
Skipping adding config files for console for 0
Vdisk Snap Table for host:0 is initialized
fcp_service: FCP is not licensed.
ONTAP EMS log disabled. User space <notifyd> processes EMS log file
netapp_varfs: Failed to backup /var to CF.
mroot is now available
mroot is now available
filter sync'd
Thu Dec 2 20:28:14 GMT 2010
login: /var: optimization changed from TIME to SPACE
login:
login: admin
Password:
node7::> system coredump show
Node:Type    Core Name                               Saved Panic Time
------------ --------------------------------------- ----- ------------------
node7:kernel core.118060269.2010-12-02.20_22_17.nz   true  12/2/2010 20:22:17
node7::> system coredump save
Usage:
    [-node] <nodename>     Node That Owns the Coredump
    [-corename] <text>     Coredump Name
node7::> system coredump save -name node7

ERROR: invalid argument "-name"


node7::> system coredump save -node node7
ERROR: Either specify all keys, or set at least one key to "*".
node7::> system -node node7 coredump save
ERROR: "-node" is not a recognized command
node7::> system node run -node node7 coredump save
coredump not found. Type '?' for a list of commands
node7::>

(Login timeout will occur in 60 seconds)

node7::>

(Login timeout will occur in 50 seconds)

node7::>

(Login timeout will occur in 40 seconds)

node7::>

(Login timeout will occur in 30 seconds)

node7::>

(Login timeout will occur in 20 seconds)

node7::>

(Login timeout will occur in 10 seconds)

node7::>
Exiting due to timeout
login:
login:
login:
login:
login:
login:
login:
login:
login:
login:
login:
login: admin
Password:
node7::>
node7::>
node7::>
node7::> (Login timeout will occur in 60 seconds)
node7::>

(Login timeout will occur in 50 seconds)

node7::>

(Login timeout will occur in 40 seconds)

node7::>

(Login timeout will occur in 30 seconds)

node7::>

(Login timeout will occur in 20 seconds)

node7::>

(Login timeout will occur in 10 seconds)

node7::> dashboard performance show
                        Average        ---Data-Network--- -Cluster--Network- ----Storage---
                 Total  Latency  CPU   Busy  Recv  Sent   Busy  Recv  Sent   Read  Write
                 Ops/s  in usec  Busy  Util  MB/s  MB/s   Util  MB/s  MB/s   MB/s  MB/s
                 ------ -------- ----  ----- ----- -----  ----- ----- -----  ----- -----
node7                 0        0   4%     0%     0     0     0%     0     0      0     0
node8                 0     1743   4%     0%     0     0     0%     0     0      0     0
cluster:summary       0     1743   4%     0%     0     0     0%     0     0      0     0
3 entries were displayed.
node7::> stat show ?
  (statistics show)
  [ -descriptions | -instance | -fields <fieldname>, ... ]
  [[-node] <nodename>]        Node
  [[-category] <category>]    Category
  [[-object] <text>]          Object
  [[-counter] <text>]         Counter
  [ -value <Counter64> ]      Value
  [ -delta <text> ]           Delta
  [ -description <text> ]     Description
  [ -prop <text> ]            Properties
node7::> stat show

(statistics show)
Node: node7
Category.Object.Counter                         Value         Delta
----------------------------------------------- ------------- -------------
node.node.cpu-busy                              17%           -
node.node.total-ops                             0             -
node.node.nfs-ops                               0             -
node.node.cifs-ops                              0             -
node.node.data-busy                             0%            -
node.node.data-recv                             0B            -
node.node.data-sent                             0B            -
node.node.cluster-busy                          0%            -
node.node.cluster-recv                          91.7MB        -
node.node.cluster-sent                          73.5MB        -
node.node.disk-read                             83.1MB        -
node.node.disk-write                            958MB         -
latency.latency.nfs-ops                         0             -
latency.latency.nfs-latency                     0us           -
latency.latency.cifs-ops                        0             -
latency.latency.cifs-latency                    0us           -
latency.latency.nlm-ops                         0             -
latency.latency.nlm-latency                     0us           -
Node: node7
Category.Object.Counter                         Value         Delta
----------------------------------------------- ------------- -------------
latency.latency.mount-ops                       0             -
latency.latency.mount-latency                   0us           -
latency.latency.sm-ops                          0             -
latency.latency.sm-latency                      0us           -
latency.latency.portmap-ops                     0             -
latency.latency.portmap-latency                 0us           -
disk.0a.112.total_transfers                     17480         -
disk.0a.112.user_read_chain                     1             -
disk.0a.112.user_reads                          2450          -
disk.0a.112.user_write_chain                    3             -
disk.0a.112.user_writes                         14904         -
disk.0a.112.user_writes_in_skip_mask            0             -
disk.0a.112.user_skip_write_ios                 0             -
disk.0a.112.cp_read_chain                       2             -
disk.0a.112.cp_reads                            126           -
disk.0a.112.guarenteed_read_chain               0             -
disk.0a.112.guarenteed_reads                    0             -
disk.0a.112.guarenteed_write_chain              0             -
disk.0a.112.guarenteed_writes                   0             -
Press <space> to page down, <return> for next line, or 'q' to quit... (Login timeout will occur in 60 seconds)
Press <space> to page down, <return> for next line, or 'q' to quit... (Login timeout will occur in 50 seconds)
Press <space> to page down, <return> for next line, or 'q' to quit... (Login timeout will occur in 40 seconds)
Press <space> to page down, <return> for next line, or 'q' to quit... (Login timeout will occur in 30 seconds)
Press <space> to page down, <return> for next line, or 'q' to quit... (Login timeout will occur in 20 seconds)
Press <space> to page down, <return> for next line, or 'q' to quit... (Login timeout will occur in 10 seconds)
Press <space> to page down, <return> for next line, or 'q' to quit... 37 entries were displayed.
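Each `statistics show` row pairs a dotted Category.Object.Counter name with a value. A throwaway sketch for splitting such a row back into its parts (my own parser for these notes, not a NetApp utility); note the object segment can itself contain dots, as in the disk.0a.112.* counters:

```python
def parse_counter(row):
    """Split one "Category.Object.Counter  Value" row into its four parts."""
    name, value = row.split()
    parts = name.split(".")
    # First component is the category, last is the counter; anything in
    # between (possibly dotted, e.g. "0a.112") is the object.
    return parts[0], ".".join(parts[1:-1]), parts[-1], value

print(parse_counter("node.node.cluster-recv 91.7MB"))
# ('node', 'node', 'cluster-recv', '91.7MB')
print(parse_counter("disk.0a.112.total_transfers 17480"))
# ('disk', '0a.112', 'total_transfers', '17480')
```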
node7::>
