Netapp Clustered Ontap CLI Pocket Guide

On this page I will be constantly adding NetApp Clustered Data ONTAP CLI commands as an easy-reference pocket guide.
Most clustered ONTAP commands work in 8.x; some require 9.x (a note is included where a 9.x version is required).
(Updated 04-Oct-2018)

MISC

set -privilege advanced (Enter into privilege mode)
set -privilege diagnostic (Enter into diagnostic mode)
set -privilege admin (Enter into admin mode)
system timeout modify 30 (Sets system timeout to 30 minutes)
system node run -node local sysconfig -a (Run sysconfig on the local node)
In clustered ONTAP the symbol ! means "other than" / "not", i.e. storage aggregate show -state !online (show all aggregates that are not online)
node run -node <node_name> -command sysstat -c 10 -x 3 (Running the sysstat performance tool with cluster mode)
system node image show (Show the running Data Ontap versions and which is the default boot)
dashboard performance show (Shows a summary of cluster performance including interconnect traffic)
node run * environment shelf (Shows information about the Shelves Connected including Model Number)
network options switchless-cluster show (Displays if nodes are setup for cluster switchless or switched – need to be in advanced mode)
network options switchless-cluster modify true (Sets the nodes to use cluster switchless, setting to false sets the node to use cluster switches – need to be in advanced mode)
security login banner show (Show the current login banner)
security login banner modify -message "Only Authorized users allowed!" (Set the login banner to Only Authorized users allowed)
security login banner modify -message "" (Clears the login banner)
security login motd show (Shows the current Message of the day)
security login motd modify -vserver vserver1 (Modify the Message of the day, use the variable below)
  • Operating System = \s
  • Software Version = \r
  • Node Name = \n
  • Username = \N
  • Time = \t
  • Date = \d
security login motd modify -vserver vserver1 -message "" (Clears the current Message of the Day)
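Putting the variables together, a hedged example assuming a vserver named vserver1 (the message text itself is illustrative):

```
security login motd modify -vserver vserver1 -message "Welcome \N. You are connected to node \n running \r."
```

The escape sequences are expanded at login time, so each user sees their own username and the node they landed on.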

DIAGNOSTICS USER CLUSTERED ONTAP

security login unlock -username diag (Unlock the diag user)
security login password -username diag (Set a password for the diag user)
security login show -username diag (Show the diag user)

SYSTEM CONFIGURATION BACKUPS FOR CLUSTERED ONTAP

system configuration backup create -backup-name node1-backup -node node1 (Create a cluster backup from node1)
system configuration backup create -backup-name node1-backup -node node1 -backup-type node (Create a node backup of node1)
system configuration backup settings modify -destination ftp://192.168.1.10/ -username backups (Sets scheduled backups to go to this destination URL)
system configuration backup settings set-password (Sets the backup password for the destination URL above)
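A backup can also be copied off the cluster manually with the upload command. A minimal sketch, reusing the backup name and FTP destination from the examples above (the .7z filename is an assumption based on the default backup archive format):

```
system configuration backup upload -node node1 -backup node1-backup.7z -destination ftp://192.168.1.10/ (Uploads the named backup archive to the FTP server)
```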

LOGS

To look at the logs within clustered ontap you must log in as the diag user to a specific node
set -privilege advanced
systemshell -node <node_name>
username: diag
password: <your diag password>
cd /mroot/etc/mlog
cat command-history.log | grep volume (searches the command-history.log file for the keyword volume)
exit (exits out of diag mode)
http://<cluster-ip address>/spi (log in with your username and password; from here you can browse logs and core dumps)

COREDUMP

system coredump status (shows unsaved cores, saved cores and partial cores)
system coredump show (lists coredump files and panic dates)

PANIC STRINGS

SP> priv set advanced (Enter advanced mode within the service-processor console)
SP> system log (list the service-processor system log. Within the first few lines you will see the panic string)
Copy the panic string and type it into the NetApp Panic String Analyzer

SERVICE PROCESSOR

system node image get -package http://webserver/306-02765_A0_SP_3.0.1P1_SP_FW.zip -replace-package true (Copies the firmware file from the webserver into the mroot directory on the node)
system node service-processor image update -node node1 -package 306-02765_A0_SP_3.0.1P1_SP_FW.zip -update-type differential (Installs the firmware package to node1)
system node service-processor show (Show the service processor firmware levels of each node in the cluster)
system node service-processor image update-progress show (Shows the progress of a firmware update on the Service Processor)
service-processor reboot-sp -node NODE1 (reboot the sp of node1)

DISK SHELVES

storage shelf show (an 8.3 command that displays the loops and shelf information)

AUTOSUPPORT

system node autosupport budget show -node local (In diag mode – displays current time and size budgets)
system node autosupport budget modify -node local -subsystem wafl -size-limit 0 -time-limit 10m (In diag mode – modification as per Netapp KB1014211)
system node autosupport show -node local -fields max-http-size,max-smtp-size (Displays max http and smtp sizes)
system node autosupport modify -node local -max-http-size 0 -max-smtp-size 8MB (modification as per Netapp KB1014211)

NTP

cluster time-service ntp server create (Configure an NTP server or multiple NTP servers)
cluster time-service ntp server show (Show the current NTP servers)
cluster time-service ntp server modify (Modify the NTP server list)
cluster time-service ntp server delete (Deletes an NTP server)
cluster time-service ntp server reset (Resets configuration, removes all existing NTP servers)
cluster date show (Displays the cluster date)
cluster date modify (Modify the cluster date)
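A minimal worked sequence, assuming an NTP server reachable at 192.168.1.5 (substitute your own time source and timezone):

```
cluster time-service ntp server create -server 192.168.1.5 (Add the NTP server)
cluster time-service ntp server show (Verify it is listed)
cluster date modify -timezone Australia/Sydney (Set the cluster timezone)
```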

CLUSTER

set -privilege advanced (required to be in advanced mode for the below commands)
cluster statistics show (shows statistics of the cluster – CPU, NFS, CIFS, FCP, Cluster Interconnect Traffic)
cluster ring show -unitname vldb (check if volume location database is in quorum)
cluster ring show -unitname mgmt (check if management application is in quorum)
cluster ring show -unitname vifmgr (check if virtual interface manager is in quorum)
cluster ring show -unitname bcomd (check if san management daemon is in quorum)
cluster unjoin (must be run at advanced privilege; removes a node from the cluster. Must also remove its cluster HA partner)
debug vreport show (must be run in priv -set diag, shows WAFL and VLDB consistency)
event log show -messagename scsiblade.* (show that cluster is in quorum)
cluster kernel-service show -list (in diag mode, displays in quorum information)
debug smdb table bcomd_info show (displays database master / secondary for bcomd)

NODES

system node rename -node <current_node_name> -newname <new_node_name>
system node reboot -node NODENAME -reason ENTER REASON (Reboot node with a given reason. NOTE: check ha policy)

FLASH CACHE

system node run -node * options flexscale.enable on (Enabling Flash Cache on each node)
system node run -node * options flexscale.lopri_blocks on (Enabling Flash Cache on each node)
system node run -node * options flexscale.normal_data_blocks on (Enabling Flash Cache on each node)
node run NODENAME stats show -p flexscale (flash cache configuration)
node run NODENAME stats show -p flexscale-access (display flash cache statistics)

FLASH POOL

storage aggregate modify -hybrid-enabled true (Change the AGGR to hybrid)
storage aggregate add-disks -aggregate <aggr_name> -disktype SSD -diskcount <number_of_SSD_disks> (Add SSD disks to the aggregate to begin creating a flash pool)
priority hybrid-cache set volume1 read-cache=none write-cache=none (Within node shell and diag mode disable read and write cache on volume1)

FAILOVER

storage failover takeover -bynode <node_name> (Initiate a failover)
storage failover giveback -bynode <node_name> (Initiate a giveback)
storage failover modify -node <node_name> -enabled true (Enabling failover on one of the nodes enables it on the other)
storage failover show (Shows failover status)
storage failover modify -node <node_name> -auto-giveback false (Disables auto giveback on this ha node)
storage failover modify -node <node_name> -auto-giveback true (Enables auto giveback on this ha node)
aggregate show -node NODENAME -fields ha-policy (show SFO HA Policy for aggregate)

AGGREGATES

aggr create -aggregate <aggregate_name> -diskcount <the number of disks you wish to add> -raidtype raid_dp -maxraidsize 18 (Create an AGGR with X amount of disks, raid_dp and raidgroup size 18)
aggr offline | online (Make the aggr offline or online)
aggr rename -aggregate <aggr_name> -newname <new_aggr_name> (Change the name of an existing aggr)
aggr relocation start -node node01 -destination node02 -aggregate-list aggr1 (Relocate aggr1 from node01 to node02)
aggr relocation show (Shows the status of an aggregate relocation job)
aggr show -space (Show used and used% for volume foot prints and aggregate metadata)
aggregate show (show all aggregates size, used% and state)
aggregate add-disks -aggregate <aggregate_name> -diskcount <number_of_disks> (Adds a number of disks to the aggregate)
reallocate measure -vserver vmware -path /vol/datastore1 -once true (Test to see if the volume datastore1 needs to be reallocated or not)
reallocate start -vserver vmware -path /vol/datastore1 -force true -once true (Run reallocate on the volume datastore1 within the vmware vserver)

DISKS

storage disk assign -disk 0a.00.1 -owner <node_name> (Assign a specific disk to a node) OR
storage disk assign -count <number_of_disks> -owner <node_name> (Assign unallocated disks to a node)
storage disk show -ownership (Show disk ownership to nodes)
storage disk show -state broken | copy | maintenance | partner | percent | reconstructing | removed | spare | unfail | zeroing (Show the state of a disk)
storage disk modify -disk NODE1:4c.10.0 -owner NODE1 -force-owner true (Force the change of ownership of a disk)
storage disk removeowner -disk NODE1:4c.10.0 -force true (Remove ownership of a drive)
storage disk set-led -disk Node1:4c.10.0 -action blink -time 5 (Blink the led of disk 4c.10.0 for 5 minutes. Use the blinkoff action to turn it off)

VSERVER

vserver setup (Runs the clustered ontap vserver setup wizard)
vserver create -vserver <vserver_name> -rootvolume <volume_name> (Creates a new vserver)
vserver show (Shows all vservers in the system)
vserver show -vserver <vserver_name> (Show information on a specific vserver)

VOLUMES

volume create -vserver <vserver_name> -volume <volume_name> -aggregate <aggr_name> -size 100GB -junction-path /eng/p7/source (Creates a Volume within a vserver)
volume move -vserver <vserver_name> -volume <volume_name> -destination-aggregate <aggr_name> -foreground true (Moves a Volume to a different aggregate with high priority)
volume move -vserver <vserver_name> -volume <volume_name> -destination-aggregate <aggr_name> -cutover-action wait (Moves a Volume to a different aggregate with low priority but does not cutover)
volume move trigger-cutover -vserver <vserver_name> -volume <volume_name> (Trigger a cutover of a volume move in waiting state)
volume move show (shows all volume moves currently active or waiting. NOTE: You can only do 8 volume moves at one time, more than 8 and they get queued)
system node run -node <node_name> vol size <volume_name> 400g (resize volume_name to 400GB) OR
volume size -volume <volume_name> -new-size 400g (resize volume_name to 400GB)
volume modify -vserver <vserver_name> -filesys-size-fixed false -volume <volume_name> (Turn off fixed file sizing on volumes)
volume recovery-queue purge-all (An 8.3 command that purges the volume undelete cache)
volume show -vserver SVM1 -volume * -autosize true (Shows which volumes have autosize enabled)
volume show -vserver SVM1 -volume * -atime-update true (Shows which volumes have update access time enabled)
volume modify -vserver SVM1 -volume volume1 -atime-update false (Turns update access time off on the volume)

LUNS

lun show -vserver <vserver_name> (Shows all luns belonging to this specific vserver)
lun modify -vserver <vserver_name> -space-allocation enabled -path <lun_path> (Turns on space allocation so you can run lun reclaims via VAAI)
lun geometry -vserver <vserver_name> -path /vol/vol1/lun1 (Displays the lun geometry)
lun mapping add-reporting-nodes -vserver <vserver_name> -volume <vol name> -lun <lun path> -igroup <igroup name> -destination-aggregate <aggregate name> (Adds the igroup as reporting nodes for the lun)
lun mapping show -vserver <vserver name> -volume <volume name> -fields reporting-nodes (Show reporting nodes for a specific volume)

NFS

vserver nfs modify -vserver <vserver_name> -v4.1 enabled -v4.1-pnfs enabled (Enable pNFS. NOTE: Cannot coexist with NFSv4)

FCP

storage show adapter (Show Physical FCP adapters)
fcp adapter modify -node NODENAME -adapter 0e -state down (Take port 0e offline)
node run <nodename> fcpadmin config (Shows the config of the adapters – Initiator or Target)
node run <nodename> fcpadmin config -t target 0a (Changes port 0a from initiator to target – you must reboot the node)
vserver fcp ping-initiator (Ping check between initiator and lifs)
vserver fcp ping-igroup (Ping check between igroup and lifs)

CIFS

vserver cifs modify -vserver <vserver_name> -default-site AD-DC-Site (ONTAP 9.4 – Specify an Active Directory site)
vserver cifs options modify -vserver <vserver_name> -is-large-mtu-enabled false (ONTAP 9.x – set to false due to NetApp Bug ID 1139257)
cifs domain discovered-servers discovery-mode modify -vserver <vserver name> -mode site (Ontap 9.3 – Set Domain Controller discover to single site)
vserver cifs create -vserver <vserver_name> -cifs-server <node_name> -domain <domain_name> (Enable Cifs)
vserver cifs share create -share-name root -path / (Create a CIFS share called root)
vserver cifs share show
vserver cifs show

SMB

vserver cifs options modify -vserver <vserver_name> -smb2-enabled true (Enable SMB 2.0 and 2.1)

SNAPSHOTS

volume snapshot create -vserver vserver1 -volume vol1 -snapshot snapshot1 (Create a snapshot on vserver1, vol1 called snapshot1)
volume snapshot restore -vserver vserver1 -volume vol1 -snapshot snapshot1 (Restore a snapshot on vserver1, vol1 called snapshot1)
volume snapshot show -vserver vserver1 -volume vol1 (Show snapshots on vserver1 vol1)
snap autodelete show -vserver SVM1 -enabled true (Shows which volumes have autodelete enabled)

DP MIRRORS AND SNAPMIRRORS

volume create -vserver <vserver_name> -volume vol10_mirror -aggregate <destination_aggr_name> -type DP (Create a destination SnapMirror volume)
snapmirror create -vserver <vserver_name> -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP (Create a snapmirror relationship for sysadmincluster)
snapmirror initialize -source-path sysadmincluster://vserver1/vol10 -destination-path sysadmincluster://vserver1/vol10_mirror -type DP -foreground true (Initialize the snapmirror example)
snapmirror update -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 1000 (Snapmirror update and throttle to 1000KB/sec)
snapmirror modify -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror -throttle 2000 (Change the snapmirror throttle to 2000)
snapmirror restore -source-path vserver1:vol10 -destination-path vserver2:vol10_mirror (Restore a snapmirror from destination to source)
snapmirror show (show snapmirror relationships and status)
NOTE: You can create snapmirror relationships between 2 different clusters by creating a peer relationship
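The peer relationship mentioned above is built with the cluster peer and vserver peer commands. A hedged sketch, assuming a remote intercluster LIF at 192.168.1.20 and illustrative cluster/vserver names:

```
cluster peer create -peer-addrs 192.168.1.20 (Run on each cluster, pointing at the other cluster's intercluster LIFs)
vserver peer create -vserver vserver1 -peer-vserver vserver2 -peer-cluster cluster2 -applications snapmirror (Peer the vservers for snapmirror use)
```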

SNAPVAULT

snapmirror create -source-path vserver1:vol5 -destination-path vserver2:vol5_archive -type XDP -schedule 5min -policy backup-vspolicy (Create snapvault relationship with 5 min schedule using backup-vspolicy)
NOTE: Type DP (asynchronous), LS (load-sharing mirror), XDP (backup vault, snapvault), TDP (transition), RST (transient restore)

DEDUPE

volume efficiency on -vserver SVM1 -volume volume1 (Turns Dedupe on for this volume)
volume efficiency start -vserver SVM1 -volume volume1 -dedupe true -scan-old-data true (Starts a volume efficiency dedupe job on volume1, scanning old data)
volume efficiency start -vserver SVM1 -volume volume1 -dedupe true (Starts a volume efficiency dedupe job on volume1, not scanning old data)
volume efficiency show -op-status !idle (This will display the running volume efficiency tasks)

NETWORK INTERFACE

network interface show (show network interfaces)
network interface modify -vserver vserver1 -lif cifs1 -address 192.168.1.10 -netmask 255.255.255.0 -force-subnet-association (Data Ontap 8.3 – forces the lif to use an IP address from the subnet range that has been setup)
network port show (Shows the status and information on current network ports)
network port modify -node * -port <vif_name> -mtu 9000 (Enable Jumbo Frames on interface <vif_name>)
network port modify -node * -port <data_port_name> -flowcontrol-admin none (Disables Flow Control on port data_port_name)
network interface revert * (revert all network interfaces to their home port)
ifgrp create -node <node_name> -ifgrp <vif_name> -distr-func ip -mode multimode (Create an interface group called vif_name on node_name)
network port ifgrp add-port -node <node_name> -ifgrp <vif_name> -port <port_name> (Add a port to vif_name)
net int failover-groups create -failover-group data_<vif_name>_fg -node <node_name> -port <vif_name> (Create a failover group – Complete on both nodes)
ifgrp show (Shows the status and information on current interface groups)
net int failover-groups show (Show Failover Group Status and information)
node run node1 ifstat -a (shows interface statistics such as crc errors)
node run node1 ifstat -z (clears interface statistics, optionally specify the interface name to clear for that specific interface)

ROUTING GROUPS

network interface show-routing-group (show routing groups for all vservers)
network routing-groups show -vserver vserver1 (show routing groups for vserver1)
network routing-groups route create -vserver vserver1 -routing-group 10.1.1.0/24 -destination 0.0.0.0/0 -gateway 10.1.1.1 (Creates a default route on vserver1)
ping -lif-owner vserver1 -lif data1 -destination www.google.com (ping www.google.com via vserver1 using the data1 port)

DNS

services dns show (show DNS)

UNIX

vserver services unix-user show
vserver services unix-user create -vserver vserver1 -user root -id 0 -primary-gid 0 (Create a unix user called root)
vserver name-mapping create -vserver vserver1 -direction win-unix -position 1 -pattern (.+) -replacement root (Create a name mapping from windows to unix)
vserver name-mapping create -vserver vserver1 -direction unix-win -position 1 -pattern (.+) -replacement sysadmin011 (Create a name mapping from unix to windows)
vserver name-mapping show (Show name-mappings)

NIS

vserver services nis-domain create -vserver vserver1 -domain vmlab.local -active true -servers 10.10.10.1 (Create nis-domain called vmlab.local pointing to 10.10.10.1)
vserver modify -vserver vserver1 -ns-switch nis-file (Name Service Switch referencing a file)
vserver services nis-domain show

NTP

system services ntp server create -node <node_name> -server <ntp_server> (Adds an NTP server to node_name)
system services ntp config modify -enabled true (Enable ntp)
system node date modify -timezone <Area/Location Timezone> (Sets timezone for Area/Location Timezone. i.e. Australia/Sydney)
node date show (Show date on all nodes)

DATE AND TIME

timezone -timezone Australia/Sydney (Sets the timezone for Sydney. Type ? after -timezone for a list)
date 201307090830 (Sets date for yyyymmddhhmm)
date -node <node_name> (Displays the date and time for the node)

CONVERGED NETWORK ADAPTERS (FAS 8000)

ucadmin show -node NODENAME (Show CNA ports on specific node)
ucadmin modify -node NODENAME -adapter 0e -mode cna (Change adapter 0e from FC to CNA. NOTE: A reboot of the node is required)

PERFORMANCE

statistics show-periodic -object volume -instance volumename -node node1 -vserver vserver1 -counter total_ops|avg_latency|read_ops|read_latency (Show the specific counters for a volume)
statistics show-periodic -object nfsv3 -instance vserver1 -counter nfsv3_ops|nfsv3_read_ops|nfsv3_write_ops|read_avg_latency|write_avg_latency (Shows the specific nfsv3 counters for a vserver)
sysstat -x 1 (Shows counters for CPU, NFS, CIFS, FCP, WAFL)

GOTCHAS

Removing a port from an ifgrp – to remove a port from an ifgrp, you must first shut down any sub-interfaces of that ifgrp. For example, if your ifgrp is named a0a and it carries a VLAN called a0a-100, first take down a0a-100; you will then be able to remove the port from the ifgrp.
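The steps above can be sketched as a command sequence (node, ifgrp, VLAN and port names are illustrative):

```
network port vlan delete -node node1 -vlan-name a0a-100 (Remove the VLAN sub-interface first)
network port ifgrp remove-port -node node1 -ifgrp a0a -port e0c (Now the port can be removed from the ifgrp)
```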
FCoE lif moves – to move an FCoE lif from its current home-port in clustered ONTAP, you must first offline the FCoE lif, perform the lif move, and then online the lif.
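The FCoE lif move above can be sketched as follows (vserver, lif, node and port names are illustrative):

```
network interface modify -vserver vserver1 -lif fcoe_lif1 -status-admin down (Offline the lif)
network interface modify -vserver vserver1 -lif fcoe_lif1 -home-node node2 -home-port 0e (Assign the new home-port)
network interface modify -vserver vserver1 -lif fcoe_lif1 -status-admin up (Online the lif)
```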
Disclaimer:
All the tutorials included on this site are performed in a lab environment to simulate a real-world production scenario. While every effort is made to provide the most accurate steps to date, we take no responsibility if you implement any of these steps in a production environment.
