How to Set Up Syslog in Clustered ONTAP

In the following post, we set up syslog monitoring for Clustered Data ONTAP. The command examples below are taken from a 2-node Clustered ONTAP 8.2 SIM cluster. For our syslog server, we’re using the ‘Kiwi Syslog Service Manager (Free Version 9.4).’

How to Configure Events to go to a Syslog Server in Clustered ONTAP

The following commands will: show the current event destinations; create a new destination called ‘syslogger’ that sends to our syslog server on 10.0.0.1; show the current event config; set the CLI not to paginate the output for the next bit; show event routes for all 6908 events; add syslogger as a destination for all 6908 events; verify the event routes have been modified; test that we can ping the syslog server; set the privilege level to diag; get the output of ‘event config show’ at diag level (there’s much more there in diag); and finally generate an event to test that the syslog server is receiving messages.

::> event destination show
::> event destination create -name syslogger -syslog 10.0.0.1
::> event config show
::> set -rows 0
::> event route show

Important note: The next line will add syslogger as a destination for all 6908 events. Without more testing/investigation on my part, this might not be a good idea on a production cluster. If you can decide which events you want to route to the syslogger, it would be advisable to do so, but with 6908 possible events that’s not easy to choose, and there are no options in the 8.2 CLI to route, say, just CRITICAL events to the syslogger. Please see Appendix A for my reasoning on this!

::> event route add-destinations -messagename * -destinations syslogger
::> event route show
::> network ping -node NAC1-01 -destination 10.0.0.1
::> set diag
::*> event config show
::*> event generate -messagename cf.fm.takeoverStarted -values 1

How to Set Up the Kiwi Syslog Service Manager Free Version

Kiwi Syslog Service Manager is very easy to install: download it from http://www.kiwisyslog.com/free-edition.aspx and run the installer.

Once it is installed, go to the File menu and click Setup.

Image: Kiwi Syslog Free > File > Setup
Go to the Inputs section and add the node management IP addresses from our node Vservers (in Clustered Data ONTAP, AutoSupport and syslog events are delivered from the node-mgmt LIF on each node).

Image: Kiwi Syslog Free - add hosts to receive events from

In the Inputs > UDP section: check that ‘Listen for UDP Syslog messages’ is ticked and enter the ‘Bind to address’ of the syslog server.

Image: Kiwi Syslog Free - Listen for UDP Syslog messages on UDP port 514 (default)
Then click OK in the ‘Kiwi Syslog Server Setup’ window.

Verify that the firewall policy allows UDP connections from the node-mgmt LIFs to the syslog server on port 514.

To check from the Windows command prompt that the syslog server is listening on port 514, run the command:

> netstat -an

And look for something like:

UDP    10.0.0.1:514           *:*
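As an additional end-to-end check from any machine that can reach the syslog server, you can fire a quick test message at the listener and watch for it in the Kiwi display. The sketch below is an assumption of mine, not part of the original setup: it hand-builds a minimal BSD-style (RFC 3164) syslog message and sends it over UDP; the address 10.0.0.1 is the syslog server IP used throughout this post.

```python
import socket

def send_test_syslog(host, port=514, tag="cdot-test"):
    """Send a single BSD-style syslog message over UDP and return it."""
    # PRI 134 = facility local0 (16) * 8 + severity informational (6)
    msg = f"<134>{tag}: test message from syslog setup check"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg.encode("ascii"), (host, port))
    return msg

if __name__ == "__main__":
    send_test_syslog("10.0.0.1")  # syslog server IP from this post
```

If the message doesn’t show up in Kiwi, suspect the firewall or the ‘Bind to address’ setting before blaming the cluster side.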

Appendix A: Why I Have Reservations About Doing ::> event route add-destinations -messagename * -destinations syslogger

The reason I can’t recommend it is that I managed to break the SIM. When I added syslogger as a destination for all event routes, the root volume filled up quite rapidly. Now, this was on a SIM with only a 1.6GB root volume; the minimum on a production system is 250GB, so this is very unlikely to happen on a production system, and the logs should cycle well before it becomes a problem. This is just a word of caution.

Dipping into the systemshell and browsing to:

/mroot/etc/log/mlog

I noticed the following log files (which reach a maximum size of 100MB before creating a new log):

notifyd.log.*

These were being generated at roughly one every 5 minutes, or 20MB a minute. This soon filled up the SIM’s root volume!
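At that rate it doesn’t take long to exhaust a small root volume. A quick back-of-the-envelope calculation using the figures above (Python used purely as a calculator):

```python
root_volume_mb = 1.6 * 1024       # SIM root volume: 1.6GB
log_rate_mb_per_min = 20          # one 100MB notifyd.log.* every 5 minutes
minutes_to_fill = root_volume_mb / log_rate_mb_per_min
print(round(minutes_to_fill))     # roughly 82 minutes to fill the SIM's root volume
```

By the same arithmetic, a 250GB production root volume would take days rather than minutes, which is why log rotation should keep up there.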

Appendix B: A Few Commands I Found Useful in the Event of the “The root volume (/mroot) is dangerously low on space (<10MB)” Error

If you get the above problem on a production system, call NetApp Support straight away and log a P1; the notes below are simply here for my own convenience! Caveat lector!!!

From the clustershell (::> prompt):

system node run -node {NODENAME}

From the nodeshell (> prompt):

df
snap list
snap delete vol0 {SNAPSHOT NAMES}
snap sched vol0 0 0 0
snap reserve vol0 0

There is probably a very good reason not to set the root volume snap schedule to 0 0 0, and not to set the snap reserve to 0; one to investigate further!

df
df -A
vol size vol0 {UP_TO_95%_OF_AGGR_SIZE}

snap list -A
snap delete -A aggr0 {SNAPSHOT NAMES}
snap sched -A aggr0 0 0 0

Interesting how even new aggregates don’t default to a snap sched of 0 0 0; perhaps this problem manifests only in the SIM on 8.2. Another one to investigate further! (Note: in 8.2P4, new aggregates by default still get a snap sched of 0 weekly 1 nightly 4@9,14,19 hourly!?)

How to get to the systemshell from the clustershell (::>):

security login unlock -username diag
security login password -username diag
set -privilege advanced
systemshell local

Useful commands in the systemshell (% prompt):

df
cd {PATH}
ls
ls -l
du {PATH/FOLDER NAME}
du -ah
pwd
cd /mroot/etc/log/mlog
rm notifyd.log.*
cd /mroot/etc/log/autosupport
rm -Rf *
cd /mroot/etc/crash
sudo rm core.*

Back in the clustershell (::>):

::> event route remove-destinations -messagename * -destinations syslogger

THE END!
