Changing Munin 2 to collect data every minute

By default in most Linux distributions, Munin will only poll every 5 minutes, which is often too coarse for modern production environments. Here's how to configure Munin 2.0 to poll every minute.

Changing the munin-update polling frequency

Generally you need to make two changes: set update_rate from 300 to 60 in munin.conf, and change the crontab entry in /etc/cron.d/munin from */5 to * so that munin-update is actually triggered every minute.
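
Concretely, the two edits look like this - the paths are Debian/Ubuntu defaults, so adjust them for your distribution:

```shell
# /etc/munin/munin.conf -- poll every 60 seconds instead of 300
update_rate 60

# /etc/cron.d/munin -- change */5 to * so munin-cron fires every minute
* * * * *  munin  if [ -x /usr/bin/munin-cron ]; then /usr/bin/munin-cron; fi
```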

This will not only change the poll frequency, but also the frequency with which the HTML and graph jobs are run. I find that the HTML job is fine, but you absolutely need to switch to FCGI graphing processes rather than rendering the graphs after each update. Aside from avoiding wasteful rerendering of graphs no one is looking at, FCGI graphing gives you zoomable detailed graphs - and higher-resolution data isn't much use without them.

To do that, change graph_strategy cron to graph_strategy cgi, and start the munin-cgi-graph processes using the spawn-fcgi helper (preferably run from a supervisor process or init system such as upstart or systemd, since the graphing jobs are not very robust).
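
A sketch of that setup, assuming the Debian location of munin-cgi-graph and a socket path your web server can reach - check both against your own installation:

```shell
# /etc/munin/munin.conf -- render graphs on demand via CGI, not from cron
graph_strategy cgi

# Start the FastCGI grapher; run this from your init system or a
# supervisor rather than by hand, so it is restarted when it dies.
spawn-fcgi -s /var/run/munin/fcgi-graph.sock \
    -u munin -g munin -U www-data \
    /usr/lib/munin/cgi/munin-cgi-graph
```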

Upgrading legacy RRD files

If you have been using Munin for a while, you're probably reluctant to lose your historic data. Justin Silver has improved and shared a script, originally posted on a Munin ticket some time ago, which can convert the old RRD files.

The Python script processes only individual RRD files. Here's a simple bash script that calls it for each file:

find /var/lib/munin -type f -iname "*.rrd" -print0 | while IFS= read -r -d $'\0' filename; do
	echo "$filename"
	rrdtool dump "$filename" > temp.5.xml
	./resize-rrd.py temp.5.xml 5 > temp.1.xml	# the Python script below, under whatever name you saved it
	rm "$filename"
	rrdtool restore temp.1.xml "$filename"
done

The '5' argument tells the script to duplicate each data point 5 times, effectively rescaling the data from one point every 5 minutes to one point every minute.
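
The duplication logic inside the script's insert loop boils down to this - a toy illustration on a plain list rather than RRD XML rows:

```python
from copy import deepcopy

def expand(rows, factor):
    """Repeat each entry `factor` times, mirroring the script's loop:
    every original row becomes `factor` identical rows at the new step."""
    out = []
    for row in rows:
        out.extend(deepcopy(row) for _ in range(factor))
    return out

print(expand([10, 20, 30], 5))
# -> [10, 10, 10, 10, 10, 20, 20, 20, 20, 20, 30, 30, 30, 30, 30]
```

Duplicating values (rather than interpolating) keeps the graphs honest: the data genuinely had 5-minute resolution, so each old sample simply covers five new 1-minute slots.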

You need to do this while munin is not running. If you're building a new munin VM, it's easiest to copy the files over from the old VM, run this script on the new VM, and then install munin afterwards. That way you can be sure no polls will be running at the same time. Don't forget to make sure all the files are owned by the munin user after you run the script (e.g. chown -R munin:munin /var/lib/munin).

If you're upgrading an existing Munin VM, don't forget to stop munin polls by temporarily commenting out the entries in /etc/cron.d/munin (or wherever your Linux distribution runs them from). You should also take a backup of the files first, because my bash script above overwrites the old files. Run it under dtach, screen, or nohup so that it carries on if your terminal gets disconnected - it's not a fast process, since it roundtrips through XML and starts a new process for each file. You could rewrite the bash script in Python if speed is important to you.
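
If you do go the Python route, a hypothetical sketch of the same loop might look like this - the resample callable stands in for the conversion logic from Justin's script, and everything else is stdlib:

```python
import os
import subprocess
import tempfile

def find_rrds(root):
    """Return every *.rrd file under root (mirrors the find command)."""
    hits = []
    for dirpath, _, names in os.walk(root):
        hits.extend(os.path.join(dirpath, n)
                    for n in names if n.lower().endswith(".rrd"))
    return hits

def upgrade(path, factor, resample):
    """Dump path to XML, resample it, and restore it in place.

    `resample` is a callable taking (xml_bytes, factor) and returning the
    expanded XML - i.e. the core of Justin's script, imported as a function
    instead of spawned as a fresh process per file.
    """
    xml = subprocess.check_output(["rrdtool", "dump", path])
    with tempfile.NamedTemporaryFile(suffix=".xml", delete=False) as tmp:
        tmp.write(resample(xml, factor))
    os.unlink(path)
    subprocess.check_call(["rrdtool", "restore", tmp.name, path])
    os.unlink(tmp.name)
```

The win is avoiding a Python interpreter start-up per RRD file; the rrdtool dump/restore round trip itself remains the dominant cost.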

Here's Justin's Python script again, with two tiny typos fixed, and using /usr/bin/env for better cross-platform compatibility:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import sys
from copy import deepcopy
try:
    from lxml import etree
except ImportError:
    try:
        import xml.etree.cElementTree as etree
    except ImportError:
        try:
            import xml.etree.ElementTree as etree
        except ImportError:
            try:
                import cElementTree as etree
            except ImportError:
                try:
                    import elementtree.ElementTree as etree
                except ImportError:
                    raise ImportError("no ElementTree implementation found")

def main(dumpfile, factor):
    xmldoc = etree.parse(dumpfile)
    root = xmldoc.getroot()
    # change step, reducing it by a factor of "factor"
    step = root.find("step")
    old_step = int(step.text)
    new_step = old_step/factor
    step.text = str(new_step) 
    database = root.findall("rra/database")
    for d in database:
        index = 0
        count = len(d)
        while count > 0:
            for i in range(0, factor-1):
                d.insert(index+1, deepcopy(d[index]))
            index = index + factor
            count = count - 1
    print etree.tostring(root)
if __name__ == "__main__":
    # arguments
    if len(sys.argv) != 3:
        print " rrddump.xml factor"
    # call main
    main(sys.argv[1], int(sys.argv[2]))