pg_dump Performance

pg_dump Performance

on 12.04.2008 04:41:53 by Ryan Wells

We're having what seem like serious performance issues with pg_dump, and
I hope someone can help.

We have several tables that store binary data as bytea (in this example,
image files), but we're having similar time issues with text tables as
well.
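
To make the setup concrete, a simplified sketch of such a table is below;
the column names are placeholders rather than our actual schema, and only
the bytea column matters for this question:

# Hypothetical layout of one of the bytea tables (names are placeholders;
# the real table used in the test below is "public"."images").
psql -h localhost -p 5432 -U postgres -d db_name -c "
CREATE TABLE public.images (
    id       serial PRIMARY KEY,
    filename text NOT NULL,
    data     bytea NOT NULL   -- raw image bytes, roughly 1-35 MB per row
);"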

In my most recent test, the sample table was about 5 GB in 1644 rows,
with image file sizes between 1 MB and 35 MB. The server was a 3.0 GHz
P4 running Windows XP with 2 GB of RAM, the backup was written to a
separate disk from the data, and little else was running on the system.

We're doing the following:

pg_dump -i -h localhost -p 5432 -U postgres -F c -v \
    -f "backupTest.backup" -t "public"."images" db_name

In the test above, the dump took 1 hr 45 min to complete. Since we expect
to have users with 50-100 GB of data, if not more, backup times
approaching an entire day are unacceptable.
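
For comparison, one variation we have only sketched out (not yet
benchmarked) is turning off the custom format's built-in compression with
-Z 0, on the theory that the image data is already compressed and
recompressing it inside pg_dump mostly costs CPU:

# Same dump as above, but with custom-format compression disabled (-Z 0).
# Already-compressed image data gains little from another gzip pass,
# so this run should show whether compression is the bottleneck.
pg_dump -i -h localhost -p 5432 -U postgres -F c -Z 0 -v \
    -f "backupTest.backup" -t "public"."images" db_name

If that run finishes dramatically faster, the time is going into
compression rather than disk or network I/O.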

We think there must be something we're doing wrong, but none of our
searches have turned up anything (we likely just don't know the right
search terms). Hopefully there's a server setting or pg_dump option we
need to change, but we're open to design changes if necessary.

Can anyone who has dealt with this before advise us?

Thanks!
Ryan



Re: pg_dump Performance

on 18.04.2008 17:45:13 by Ryan Wells

Sorry for the double post. Our email server had some problems
overnight. Feel free to ignore this. We're still working on the issue
using suggestions from last week, and we're seeing some improvements.

Ryan
