
Docs/howto/upgrade/Upgrade Zabbix 1.8 to 2.0 and Migrate Mysql to Postgresql


Upgrade From 1.8 to 2.0

  • Author's Note
This is a copy of my notes made during the upgrade from Zabbix 1.8 on MySQL to Zabbix 2.0 on PostgreSQL with autopartitioning. It may still require clean up, and it has some redundant info copied from this wiki, but I thought I should post it while it still has a chance of being useful.

Rather than upgrade all the history data, I'm going to build a new server and migrate only the configuration to it. This is somewhat complicated because I'm also moving to PostgreSQL and enabling partitioning at the same time.

Upgrade Schema to 2.0

Upgrade the Zabbix 1.8 MySQL database to the 2.0 schema. Since my history is so big, and I don't want this to take forever, I'm going to only upgrade the configuration.

The steps will be:

  • Dump the configuration portion of the database.
  • Create a transient MySQL database with the contents of this dump.
  • Upgrade the transient database.

Create A Copy of Zabbix 1.8 MySQL Database

Since I don't want to convert all my history, I need to export the configuration tables and load them into a new database which will become my upgrade target.

Since the upgrade needs every table to exist in the resulting database, we will first create an empty schema containing all the tables, then load the export into it.

Create Export Script to Dump Database

Dump all data except all alert, events, history, and trends tables. Leave the create table SQL out of the export.

  • Get a list of all the tables in the Zabbix 1.8 MySQL database.
echo 'show tables' | mysql zabbix > zabbix-1.8-tables.txt
  • Create a dump or export script which looks like this:
mysqldump --compact --complete-insert --extended-insert \
 --no-create-db --no-create-info \
  • Create a list of tables to ignore and add them to the export script:
# append to the export script (the script filename here is a placeholder)
for table in `grep -E '(acknowledge|alert|event|history|trend)' zabbix-1.8-tables.txt`; do
  echo " --ignore-table=zabbix.$table \\" >> zabbix-1.8-export.sh
done
echo " zabbix > zabbix-1.8-export-to-mysql.sql" >> zabbix-1.8-export.sh
  • You should wind up with a script that looks like this:
mysqldump --compact --complete-insert --extended-insert \
 --no-create-db --no-create-info \
 --ignore-table=zabbix.acknowledges \
 --ignore-table=zabbix.alerts \
 --ignore-table=zabbix.events \
 --ignore-table=zabbix.history \
 --ignore-table=zabbix.history_log \
 --ignore-table=zabbix.history_str \
 --ignore-table=zabbix.history_str_sync \
 --ignore-table=zabbix.history_sync \
 --ignore-table=zabbix.history_text \
 --ignore-table=zabbix.history_uint \
 --ignore-table=zabbix.history_uint_sync \
 --ignore-table=zabbix.proxy_dhistory \
 --ignore-table=zabbix.proxy_history \
 --ignore-table=zabbix.trends \
 --ignore-table=zabbix.trends_uint \
 --ignore-table=zabbix.user_history \
 zabbix > zabbix-1.8-export-to-mysql.sql

Dump Zabbix Configuration Database

  • Run the export script on the Zabbix 1.8 database server; it writes zabbix-1.8-export-to-mysql.sql.
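After running the dump, a quick sanity check helps confirm that no rows from the excluded tables slipped in. A minimal sketch, assuming the table list matches your --ignore-table flags (adapt the pattern to your own exclusions):

```shell
# Sketch: check whether a dump line targets one of the excluded tables.
# The table list here is an assumption -- match it to your --ignore-table flags.
is_ignored() {
  printf '%s\n' "$1" | grep -qE '^INSERT INTO `?(acknowledges|alerts|events|history|trends|user_history)'
}

if is_ignored 'INSERT INTO `hosts` (`hostid`) VALUES (10001);'; then
  echo "excluded table leaked into dump"
else
  echo "dump line ok"
fi
```

To scan the whole file, grep the dump for those same INSERT patterns; the match count should be zero.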

Create Transient Zabbix 1.8 MySQL Database

Since we didn't dump all the tables, we need a transient database that contains the complete (empty) 1.8 schema to hold the partial export.

  • Create temporary Zabbix database
cat <<EOF | mysql -u root
drop database if exists zabbix18;
create database zabbix18;

create user zabbix18 identified by 'zabbix18';
grant all on zabbix18.* to zabbix18;
flush privileges;
EOF
  • Build empty Zabbix 1.8 schema which will provide all the tables we didn't export. Without this the upgrade will bomb.
# get complete schema from zabbix-server-mysql-1.8
# /usr/share/doc/zabbix-server-mysql-1.8.16/create/schema/mysql.sql
mysql -u zabbix18 -p zabbix18 < zabbix-1.8-mysql-schema.sql

Load Transient Zabbix 1.8 MySQL Database

  • Load the config dump from production MySQL database.
mysql -u zabbix18 -p zabbix18 < zabbix-1.8-export-to-mysql.sql

Upgrade Transient Zabbix 1.8 MySQL Database

Get the upgrade script from the zabbix20-server-mysql package, /usr/share/zabbix-mysql/upgrades/2.0/upgrade, and copy it to your transient database server. Also copy /usr/share/zabbix-mysql/upgrades/2.0/patch.sql to zabbix-1.8-to-2.0-patch.sql, and edit the upgrade script so that it references zabbix-1.8-to-2.0-patch.sql instead of patch.sql.

  • Run the upgrade
time ./upgrade --user=zabbix18 --password=zabbix18 zabbix18
 WARNING: backup your database before performing upgrade

 This is an UNSUPPORTED Zabbix upgrade script from 1.8 to 2.0 for MySQL
 It does the following things:
  1. Updates indexes that might require changes;
  2. Patches the database from 1.8 schema to 2.0 schema;
  3. Adds 'Disabled' and 'Debug' usergroup if any missing;
  4. Checks for hosts not belonging to any group and adds them to one if any found.

 Usage: pass required MySQL parameters to this script (like database, user, password etc).

Continue ? (y/n) y
Patching the database
 ... patching of the database took 0:01:11
bash >= 4.1 detected
Checking for 'Disabled' user group... found, not doing anything.
Checking for 'Debug' user group... not found, adding.
Checking for hosts not belonging to any group... found.
Group 'none' does not exist, creating it and adding all orphaned hosts to it.
Added 5 hosts to group 'none'. Move them to correct groups manually.
 ... checking for hosts not belonging to any group took 0:00:01

real    1m13.859s
user    0m0.071s
sys     0m0.119s

Progress! Now we have a pruned down MySQL database which has been upgraded to Zabbix 2.0. Let's migrate to PostgreSQL next.

Migrate to PostgreSQL

Dump Transient MySQL Database in PostgreSQL Format

This upgraded database has only our configuration data in it. It has been upgraded to 2.0, so dump it in postgres format.

Be sure not to include the CREATE TABLE statements, because MySQL emits compound primary keys in them that PostgreSQL won't accept.

  • Dump the upgraded Zabbix 2.0 MySQL database in (more or less) PostgreSQL format:
mysqldump --compact --complete-insert --extended-insert \
 --no-create-db --no-create-info --compatible=postgresql \
 zabbix18 > zabbix-2.0-export-to-postgresql.sql

Create Empty Zabbix 2.0 PostgreSQL DB

On the new server create a Zabbix 2.0 postgres database.

  • Create a zabbix user in postgresql
su - postgres -c 'PGPASSWORD=Zabbix20; yes $PGPASSWORD | createuser -P -SdR zabbix'
  • Grant access in pg_hba.conf (the host ADDRESS below is an example; use your own network):
# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   zabbix          zabbix                                  md5
host    zabbix          zabbix          127.0.0.1/32            md5
  • Create the empty database
echo 'drop database zabbix' | psql -U postgres
echo 'CREATE DATABASE zabbix OWNER zabbix' | psql -U postgres
export PGPASSWORD=Zabbix20 
psql -U zabbix zabbix < /usr/share/zabbix-postgresql/schema.sql

Drop Constraints

The empty PostgreSQL schema we just created has constraints applied to enforce referential integrity. Those are going to be violated by our import and the import will fail, so temporarily remove the constraints.

  • Drop all constraints. Grep CONSTRAINT from schema.sql, and change all those ADDs to DROPs. Write them to zabbix-2.0-postgres-drop-constraints.sql like this:
ALTER TABLE ONLY screens_items DROP CONSTRAINT c_screens_items_1;
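One way to produce that file is to mechanically rewrite the ADD CONSTRAINT lines from schema.sql. The sed expressions below are an assumption about schema.sql's line format, so inspect the output before applying it:

```shell
# Hypothetical sample line in the format schema.sql uses:
line='ALTER TABLE ONLY screens_items ADD CONSTRAINT c_screens_items_1 FOREIGN KEY (screenid) REFERENCES screens (screenid) ON DELETE CASCADE;'

# Turn "ADD CONSTRAINT name FOREIGN KEY ..." into "DROP CONSTRAINT name;"
drop=$(printf '%s\n' "$line" | sed 's/ADD CONSTRAINT/DROP CONSTRAINT/; s/ FOREIGN KEY.*/;/')
echo "$drop"
# -> ALTER TABLE ONLY screens_items DROP CONSTRAINT c_screens_items_1;
```

Applied over the whole file: grep 'ADD CONSTRAINT' /usr/share/zabbix-postgresql/schema.sql | sed 's/ADD CONSTRAINT/DROP CONSTRAINT/; s/ FOREIGN KEY.*/;/' > zabbix-2.0-postgres-drop-constraints.sql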
  • Drop them
psql -U zabbix zabbix < zabbix-2.0-postgres-drop-constraints.sql

Load MySQL Dump Into PostgreSQL

MySQL and PostgreSQL don't see eye to eye on quotes. Tell PostgreSQL to chill out about the quotes a little bit.

  • Add the following to the top of zabbix-2.0-export-to-postgresql.sql
SET standard_conforming_strings = 'off';
SET backslash_quote = 'on';
  • Load the MySQL export to PostgreSQL.
psql -U zabbix zabbix < zabbix-2.0-export-to-postgresql.sql \
    > zabbix-2.0-import-to-postgresql.log 2>&1
  • There is a problem with loading the old images table. It produces the following error:
# grep ERROR zabbix-2.0-import-to-postgresql.log
ERROR:  invalid byte sequence for encoding "UTF8": 0x89
  • That's fine. Zabbix 2.0 has new images anyway. Load those:
psql -U zabbix zabbix < /usr/share/zabbix-postgresql/images.sql

Add Constraints Back

  • Now grep CONSTRAINT from schema.sql again and apply those statements to add the constraints back.
grep CONSTRAINT /usr/share/zabbix-postgresql/schema.sql \
    > zabbix-2.0-postgres-add-constraints.sql
psql -U zabbix zabbix < zabbix-2.0-postgres-add-constraints.sql

Load Data.sql

Loading data.sql wholesale can't be done due to primary key conflicts with the configuration we just imported, so how do we get the new 2.0 content?

  • Todo - How to get new templates, value maps, etc...?

There are some records for new tables that will have to be loaded from data.sql, but since other tables have conflicting records, we have to reconcile them.

Chatting with some Zabbix folk on IRC, they think this is fine, and the easiest way to get the new templates etc. is to bring up a second Zabbix 2.0 instance and copy configuration elements over using exports and imports.

Data.sql inserts into the following tables (list generated with awk '/^INSERT/ {print $3}' < /usr/share/zabbix-postgresql/data.sql | sort -u):

  • actions
  • applications
  • conditions
  • config
  • dchecks
  • drules
  • expressions
  • functions
  • globalmacro
  • graphs
  • graphs_items
  • graph_theme
  • groups
  • help_items
  • hosts
  • hosts_groups
  • hosts_templates
  • interface
  • item_discovery
  • items
  • items_applications
  • mappings
  • media_type
  • operations
  • opgroup
  • opmessage
  • opmessage_grp
  • optemplate
  • regexps
  • screens
  • screens_items
  • scripts
  • sysmaps
  • sysmaps_elements
  • trigger_depends
  • triggers
  • users
  • users_groups
  • usrgrp
  • valuemaps

  • Copy /usr/share/zabbix-postgresql/data.sql to /usr/share/zabbix-postgresql/update-data.sql
  • Find clashes and update instead of insert.

Check Web Interface

  • Start httpd
  • Leave zabbix-server OFF
  • Browse to zabbix and test logins and check contents of the configuration screens.

Enable History Table Partitioning

It seems that table partitioning is the best way to scale up. Without it, the Zabbix housekeeper process eventually gets too slow.

Zabbix 2.0 uses foreign keys, which break MySQL partitioning. PostgreSQL is an option.

Auto Partitioning with Zabbix 2.0 and PostgreSQL

Here is my take on Zabbix and PostgreSQL 9.x (auto) partitioning.

This approach:

  • does not require you to prepare the database to partition it with Zabbix
  • does not require you to create/schedule a cron job for creating the tables in advance
  • seems a bit simpler to implement than other solutions.

It will auto-create partitions under the "partitions" schema with the following naming convention:

partitions.tablename_pYYYYMMDD  # for DAILY   partitions 
partitions.tablename_pYYYYMM    # for MONTHLY partitions 
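To see how a row's clock value maps to a partition name, the trigger's YYYY_MM_DD formatting can be reproduced with GNU date (the clock value below is just an example):

```shell
# Example: derive the daily partition name for a history row whose clock is
# 1356998400 (2013-01-01 00:00:00 UTC), mirroring
# to_char(to_timestamp(NEW.clock), 'YYYY_MM_DD') in the trigger function.
clock=1356998400
suffix=$(date -u -d "@$clock" +%Y_%m_%d)
echo "partitions.history_p$suffix"
# -> partitions.history_p2013_01_01
```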

The short story:

psql -U zabbix -d zabbix -f 0-create-partitions-schema.sql
psql -U zabbix -d zabbix -f 1-create-partition_trg-function.sql
psql -U zabbix -d zabbix -f 2-apply-partition_trg.sql
psql -U zabbix -d zabbix -f 3-create-delete_partitions-function.sql

The long story follows.

Create the partitions schema

The partitioned tables will be created under the "partitions" schema, which you can create with:

  • 0-create-partitions-schema.sql
-- by Alessandro De Maria <>

-- Schema: partitions
DROP SCHEMA IF EXISTS partitions CASCADE;  -- careful: CASCADE drops any existing partition tables
CREATE SCHEMA partitions AUTHORIZATION zabbix;  -- owned by zabbix so the trigger can create tables in it
  • Create the schema
psql -U postgres zabbix -f 0-create-partitions-schema.sql

Create the main function with the following code

  • 1-create-partition_trg-function.sql
-- by Alessandro De Maria <>
-- Function: trg_partition()

-- DROP FUNCTION trg_partition();

CREATE OR REPLACE FUNCTION trg_partition()
  RETURNS trigger AS
$BODY$
DECLARE
  prefix text := 'partitions.';
  timeformat text;
  selector text;
  _interval interval;
  tablename text;
  startdate text;
  enddate text;
  create_table_part text;
  create_index_part text;
BEGIN
  selector := TG_ARGV[0];

  IF selector = 'day' THEN
    timeformat := 'YYYY_MM_DD';
  ELSIF selector = 'month' THEN
    timeformat := 'YYYY_MM';
  END IF;

  _interval := '1 ' || selector;
  tablename := TG_TABLE_NAME || '_p' || to_char(to_timestamp(NEW.clock), timeformat);

  EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT ($1).*' USING NEW;
  RETURN NULL;

EXCEPTION
  WHEN undefined_table THEN
    startdate := extract(epoch FROM date_trunc(selector, to_timestamp(NEW.clock)));
    enddate := extract(epoch FROM date_trunc(selector, to_timestamp(NEW.clock) + _interval));

    create_table_part := 'CREATE TABLE IF NOT EXISTS ' || prefix || quote_ident(tablename)
      || ' (CHECK ((clock >= ' || quote_literal(startdate)
      || ' AND clock < ' || quote_literal(enddate)
      || '))) INHERITS (' || TG_TABLE_NAME || ')';
    create_index_part := 'CREATE INDEX ' || quote_ident(tablename)
      || '_1 on ' || prefix || quote_ident(tablename) || '(itemid,clock)';

    EXECUTE create_table_part;
    EXECUTE create_index_part;

    -- insert it again, now that the partition exists
    EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT ($1).*' USING NEW;
    RETURN NULL;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION trg_partition()
  OWNER TO postgres;
  • Create the partitioning function
psql -U postgres zabbix -f 1-create-partition_trg-function.sql

Create a trigger for each (clock based) table you want to partition

  • 2-apply-partition_trg.sql
-- by Alessandro De Maria <>
CREATE TRIGGER partition_trg BEFORE INSERT ON history           FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_sync      FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_uint      FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_str_sync  FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON history_log       FOR EACH ROW EXECUTE PROCEDURE trg_partition('day');
CREATE TRIGGER partition_trg BEFORE INSERT ON trends            FOR EACH ROW EXECUTE PROCEDURE trg_partition('month');
CREATE TRIGGER partition_trg BEFORE INSERT ON trends_uint       FOR EACH ROW EXECUTE PROCEDURE trg_partition('month');
  • Add the triggers to the history and trends tables (only tables with a clock field are partitioned this way).
psql -U postgres zabbix -f 2-apply-partition_trg.sql

Disable partitioning

Should you want to remove the partitioning, just remove the partition_trg trigger from each table, or run the following:

-- by Alessandro De Maria <>
DROP TRIGGER partition_trg ON history;
DROP TRIGGER partition_trg ON history_sync;
DROP TRIGGER partition_trg ON history_uint;
DROP TRIGGER partition_trg ON history_str_sync;
DROP TRIGGER partition_trg ON history_log;
DROP TRIGGER partition_trg ON trends;
DROP TRIGGER partition_trg ON trends_uint;

Remove unwanted old partitions

The following optional routine deletes partitions older than the desired retention period.

  • 3-create-delete_partitions-function.sql
-- by Alessandro De Maria <>
-- Function: delete_partitions(interval, text)

-- DROP FUNCTION delete_partitions(interval, text);

CREATE OR REPLACE FUNCTION delete_partitions(intervaltodelete interval, tabletype text)
  RETURNS text AS
$BODY$
DECLARE
  result record;
  prefix text := 'partitions.';
  table_timestamp timestamp;
  delete_before_date date;
  tablename text;
BEGIN
    FOR result IN SELECT * FROM pg_tables WHERE schemaname = 'partitions' LOOP

        table_timestamp := to_timestamp(substring(result.tablename from '[0-9_]*$'), 'YYYY_MM_DD');
        delete_before_date := date_trunc('day', NOW() - intervaltodelete);
        tablename := result.tablename;

        -- Was it called properly?
        IF tabletype != 'month' AND tabletype != 'day' THEN
            RAISE EXCEPTION 'Please specify "month" or "day" instead of %', tabletype;
        END IF;

        -- Check whether the table name has a day (YYYY_MM_DD) or month (YYYY_MM) format
        IF length(substring(result.tablename from '[0-9_]*$')) = 10 AND tabletype = 'month' THEN
            -- This is a daily partition; skip it when deleting monthly partitions
            CONTINUE;
        ELSIF length(substring(result.tablename from '[0-9_]*$')) = 7 AND tabletype = 'day' THEN
            -- This is a monthly partition; skip it when deleting daily partitions
            CONTINUE;
        END IF;

        -- This is the correct table type; check whether it needs to be deleted
        IF table_timestamp <= delete_before_date THEN
            RAISE NOTICE 'Deleting table %', quote_ident(tablename);
            EXECUTE 'DROP TABLE ' || prefix || quote_ident(tablename) || ';';
        END IF;

    END LOOP;
    RETURN 'OK';
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION delete_partitions(interval, text)
  OWNER TO postgres;
  • Create the delete_partitions function.
psql -U postgres zabbix -f 3-create-delete_partitions-function.sql

Cron to purge partitions

You can then remove old partitions using the following statements:

  • 4-purge-partitions-cron.sql
-- by Alessandro De Maria <>
SELECT delete_partitions('30 days', 'day');
SELECT delete_partitions('12 months', 'month');
  • Call from crontab daily.
psql -U postgres zabbix -f 4-purge-partitions-cron.sql
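For reference, a crontab entry that runs the purge once a day might look like this (the script path is a placeholder; adjust it to wherever you keep the SQL files):

```shell
# m  h  dom mon dow  command
 30  2  *   *   *    psql -U postgres zabbix -f /usr/local/etc/zabbix/4-purge-partitions-cron.sql
```

This assumes the crontab owner can authenticate to PostgreSQL non-interactively (peer auth as postgres, or a .pgpass entry).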

Original thread

Postgres 9.x Autopartitioning

Disable Housekeeper

Since we are auto-purging the history and trends, disable the housekeeper process.

  • Set DisableHousekeeping=1 in zabbix_server.conf, then restart zabbix-server.