Asynchronous commit | Transaction loss at server crash
On 20.05.2010 18:54:56 by Balkrishna Sharma
Hello,

Couple of questions:

1. For the 'Asynchronous commit' mode, I know that WAL transactions not flushed to permanent storage will be lost in the event of a server crash. Is it possible to know what the non-flushed transactions that were lost were, in any shape/form/part? I guess not, but wanted to confirm.

2. If the above is true, then 'Asynchronous commit' is not an option for my application. In that case, how is it possible to increase the speed of 'Synchronous commit'? Can an SSD rather than an HDD make a difference? Can throwing RAM at it have an impact? Is there a test somewhere of how much RAM will help to beef up the write process (for synchronous commit)?

I need to support several hundred concurrent updates/inserts from an online form with pretty low latency (maybe a couple of milliseconds at most). Think of a save to the database at every 'tab-out' in an online form.

Thanks,
-Bala
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 19:35:39 by Scott Marlowe

On Thu, May 20, 2010 at 10:54 AM, Balkrishna Sharma wrote:
> Hello,
> Couple of questions:
> 1. For the 'Asynchronous commit' mode, I know that WAL transactions not
> flushed to permanent storage will be lost in event of a server crash. Is it
> possible to know what were the non-flushed transactions that were lost?
That's not exactly correct. Transactions that haven't been written to
WAL may be lost. This would be a small number of transactions.
Transactions written to the WAL but not to the main data store will
NOT be lost.
However, this may still not be an acceptable case for your usage.
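For reference, the behaviour under discussion maps to a single setting; a minimal postgresql.conf sketch (parameter names as in the 8.3-era documentation, where the documented risk window for asynchronous commit is roughly three times wal_writer_delay):

```conf
# Asynchronous commit: COMMIT returns before the WAL record reaches disk.
# A server crash can lose transactions committed within roughly the last
# 3 * wal_writer_delay, but it cannot corrupt the database.
synchronous_commit = off     # default: on
wal_writer_delay = 200ms     # how often the WAL writer wakes to flush
```

synchronous_commit can also be set per session or per transaction (SET LOCAL synchronous_commit TO off), so only the latency-critical writes need to give up the durability guarantee.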
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 19:36:31 by Scott Marlowe

On Thu, May 20, 2010 at 10:54 AM, Balkrishna Sharma wrote:
> I need to support several hundreds of concurrent update/inserts from an
> online form with pretty low latency (maybe couple of milliseconds at max).
> Think of a save to database at every 'tab-out' in an online form.
You can get nearly the same performance by using a RAID controller
with battery backed cache without the same danger of losing
transactions.
installation on Sun Solaris for version 8.4
On 20.05.2010 19:56:46 by Sherry.CTR.Zhu
All,
I downloaded the file for Sun Solaris, PostgreSQL version 8.4, and extracted it. Can someone tell me where the configure script is? Which Unix account should run this script? Your help is very appreciated.
15.5. Installation Procedure
1. Configuration
The first step of the installation procedure is to configure the source
tree for your system and choose the options you would like. This is done
by running the configure script. For a default installation simply enter:
./configure
This script will run a number of tests to determine values for various
system dependent variables and detect any quirks of your operating system,
and finally will create several files in the build tree to record what it
found. (You can also run configure in a directory outside the source tree
if you want to keep the build directory separate.)
The default configuration will build the server and utilities, as well as
all client applications and interfaces that require only a C compiler. All
files will be installed under /usr/local/pgsql by default.
You can customize the build and installation process by supplying one or
more of the following command line options to configure:
--prefix=PREFIX
Thanks much!
Xuefeng Zhu (Sherry)
Crown Consulting Inc. -- Oracle DBA
AIM Lab Data Team
(703) 925-3192
Re: installation on Sun Solaris for version 8.4
On 20.05.2010 20:45:47 by Scott Marlowe

On Thu, May 20, 2010 at 11:56 AM, Sherry.CTR.Zhu wrote:
>
> All,
>
>   I downloaded the file for Sun Solaris 8.4 version, and extracted. Can
> someone tell me where the configure script is? Which Unix account should
> run this script? Your help is very appreciated.
It looks like you're compiling from source so you can run this from
any account really.
./configure
make
sudo make install
After that you can create a service account for it (just a regular
user account is fine really) and use that to run initdb and pg_ctl
sudo adduser postgres
sudo mkdir /usr/local/pgsql/data
sudo chown postgres:postgres /usr/local/pgsql/data
sudo su - postgres
initdb -D /usr/local/pgsql/data
pg_ctl -D /usr/local/pgsql/data start
Or something like that. I'm a RedHat / Ubuntu guy so I'm not sure what
command Solaris uses to create an account, but I'm sure you do, so just
substitute it up there where I ran adduser.
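Once installed, the service account usually wants the binaries on its PATH and PGDATA pointing at the cluster; a small hypothetical follow-up sketch (paths assume the default --prefix=/usr/local/pgsql from the excerpt above):

```shell
# Hypothetical login-profile lines for the postgres service account;
# paths assume the default --prefix=/usr/local/pgsql.
export PATH=/usr/local/pgsql/bin:$PATH
export PGDATA=/usr/local/pgsql/data

# With PGDATA set, the initdb/pg_ctl calls above shorten to:
#   initdb
#   pg_ctl start
echo "$PGDATA"   # prints /usr/local/pgsql/data
```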
Re: installation on Sun Solaris for version 8.4
On 20.05.2010 20:46:24 by Scott Marlowe

And don't include
pgsql-admin-owner@postgresql.org
in your cc list etc... Just pgsql-admin is plenty.
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 21:02:07 by Balkrishna Sharma
Good suggestion. Thanks.

What's your take on SSD? I read somewhere that moving the WAL to SSD helps as well.
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 21:35:54 by Scott Marlowe

SSD and battery-backed cache do much the same thing, in that they reduce
random access times to close to 0. However, most SSDs are still not
considered reliable due to their internal caching systems. Hard drives
and BBU RAID are proven solutions; SSD is not really there yet in terms
of being proven reliable.
--
When fascism comes to America, it will be intolerance sold as diversity.
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 22:10:19 by Balkrishna Sharma
What if we don't rely on the cache of the SSD, i.e. have the write-through setting and not write-back? Is the performance gain then not significant enough to justify an SSD?
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 22:12:33 by Scott Marlowe

The design of SSDs is such that they cannot run without caching. An SSD
has to cache in order to arrange writes, because it cannot write small
blocks one at a time but needs to write large chunks together at once.
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 22:26:42 by Balkrishna Sharma
But if we have the write-through setting, a failure before the cache can write to disk will result in an incomplete transaction (i.e. the host will know that the transaction was incomplete). Right?

Two things I need for my system:
1. Unsuccessful transactions with a notification back that they were unsuccessful are OK, but reporting a successful transaction and then not being able to write it to the database is not acceptable (ever).
2. My write time (random access time) should be as minimal as possible.

Can an SSD with a write-through cache achieve this?

Thanks for your inputs.
Re: Asynchronous commit | Transaction loss at server crash
On 20.05.2010 22:30:21 by Scott Marlowe

On Thu, May 20, 2010 at 2:26 PM, Balkrishna Sharma wrote:
> But if we have write-through setting, failure before the cache can write to
> disk will result in incomplete transaction (i.e. host will know that the
> transaction was incomplete). Right?
> Two things I need for my system is:
> 1. Unsuccessful transactions with a notification back that it is
> unsuccessful is ok but telling it is a successful transaction and not being
> able to write to database is not acceptable (ever).
> 2. My write time (random access time) should be as minimal as possible.
> Can a SSD with write-thru cache achieve this?
> Thanks for your inputs.
Not at present. The write cache in an SSD cannot be disabled, because it
has to aggregate a bunch of writes together. So it reads, say, 128k,
changes x%, then writes it back out. During this period, power loss
could result in those writes being lost even though the SSD has already
reported success. There are some drives that supposedly have a big
enough capacitor to flush this write cache, but I have seen no
definitive tests with pgsql showing that this actually keeps your data
safe in the event of power loss during a write.
A battery backed caching RAID controller CAN be depended on, because
they have been tested and shown to do the right thing.
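The fsync-per-commit cost both of these points revolve around can be felt with a rough micro-benchmark. This is a sketch, not from the thread: it times the OS fsync path for tiny writes, not PostgreSQL itself, and absolute numbers depend entirely on the storage (a battery-backed write cache makes the fsync nearly free).

```python
import os
import tempfile
import time

def avg_commit_time(n=50, sync=True):
    """Average seconds per simulated commit: a small write, optionally
    followed by fsync, which is roughly what a synchronous WAL commit
    must wait for."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, b"x" * 128)   # a tiny stand-in for a WAL record
            if sync:
                os.fsync(fd)           # the flush synchronous commit waits on
        return (time.perf_counter() - start) / n
    finally:
        os.close(fd)
        os.unlink(path)

synced = avg_commit_time(sync=True)
unsynced = avg_commit_time(sync=False)
print(f"with fsync: {synced * 1000:.3f} ms/commit, "
      f"without: {unsynced * 1000:.3f} ms/commit")
```

On a plain disk the fsync'd case is typically orders of magnitude slower; behind a non-volatile controller cache the two converge, which is exactly the point being made about battery-backed RAID.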
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
Re: Asynchronous commit | Transaction loss at server crash
am 20.05.2010 22:33:48 von Jesper Krogh

On 2010-05-20 22:26, Balkrishna Sharma wrote:
> But if we have a write-through setting, failure before the cache can write to disk will result in an incomplete transaction (i.e. the host will know that the transaction was incomplete). Right?
>
> Two things I need for my system:
> 1. Unsuccessful transactions with a notification back that they are unsuccessful is OK, but reporting a successful transaction and then not being able to write it to the database is not acceptable (ever).
> 2. My write time (random access time) should be as minimal as possible.
> Can an SSD with write-through cache achieve this?
>
A battery-backed RAID controller is not that expensive (in the range of
1 or 2 SSD disks), and it is (more or less) a silver bullet for the task
you describe.
An SSD "might" solve the problem, but comes with a huge range of unknowns
at the moment:
* Wear over time.
* Degraded performance in write-through mode.
* Degrading performance over time.
* Writeback mode not robust to power failures.
Backing your system (SSDs) with a UPS and trusting it fully
could solve most of the problems (running in writeback mode),
but comparing complexity, I would say that the battery-backed
RAID controller is much easier to get right.
.... If you had a huge dataset you were doing random reads into and
couldn't beef up your system with more (cheap) memory, SSDs might
be a good solution for that.
--
Jesper
Re: Asynchronous commit | Transaction loss at server crash
am 21.05.2010 00:04:22 von Greg Smith

Jesper Krogh wrote:
> A Battery Backed raid controller is not that expensive. (in the range
> of 1 or 2 SSD disks).
> And it is (more or less) a silverbullet to the task you describe.
Maybe even less; in order to get an SSD that's reliable at all in terms
of good crash recovery, you have to buy a fairly expensive one. Also, and
this is really important, you really don't want to deploy onto a single
SSD and put critical system files there. Their failure rates are not
that low. You need to put them into a RAID-1 setup and budget for two
of them, which brings you right back to the price of a battery-backed
controller.
Also, it's questionable whether an SSD is even going to be faster than
standard disks for the sequential WAL writes anyway, once a non-volatile
write cache is available. Sequential writes are the area where the
performance gap between SSDs and spinning disks is the smallest.
> Plugging your system (SSD's) with an UPS and trusting it fully
> could solve most of the problems (running in writeback mode).
UPS batteries fail, and people accidentally knock out server power
cords. It's a pretty bad server that can't survive someone tripping
over the cord while it's busy, and that's the situation the "use a UPS"
idea doesn't improve.
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us
Re: Asynchronous commit | Transaction loss at server crash
am 21.05.2010 00:12:29 von Rosser Schwarz

On Thu, May 20, 2010 at 4:04 PM, Greg Smith wrote:
> Also, it's questionable whether a SSD is even going to be faster than
> standard disks for the sequential WAL writes anyway, once a non-volatile
> write cache is available. Sequential writes to SSD are the area where the
> gap in performance between them and spinning disks is the smallest.
Yeah, at this point, the only place I'd consider using an SSD in
production is as a tablespace for indexes. Their win is huge for
random IO, and indexes can always be rebuilt. Data, not so much.
Transaction logs, even less.
rls
--
:wq
Re: Asynchronous commit | Transaction loss at server crash
am 21.05.2010 00:19:46 von Greg Smith

Balkrishna Sharma wrote:
> I need to support several hundreds of concurrent update/inserts from
> an online form with pretty low latency (maybe couple of milliseconds
> at max). Think of a save to database at every 'tab-out' in an online form.
I regularly see 2000 - 4000 small write transactions per second on
systems with a battery-backed write cache and a moderate disk array
attached; 2000 TPS = 0.5 ms per commit, on average. Note, however, that it is
extremely difficult to bound the worst-case behavior here
anywhere near that tightly. Under a benchmark load I can normally get
even an extremely tuned Linux configuration to occasionally pause for
1-3 seconds at commit time, when the OS write cache is full, a
checkpoint is finishing, and the client doing the commit is stuck
waiting for that. They're rare but you should expect to see that
situation sometimes.
We know basically what causes that and how to make it less likely to
happen in a real application. But the possibility is still there, and
if your design cannot tolerate an occasional latency uptick you may be
disappointed because that's very, very hard to guarantee with the
workload you're expecting here. There are plenty of ideas for how to
tune in that direction both at the source code level and by carefully
selecting the OS/filesystem combination used, but that's not a very well
explored territory. The checkpoint design in the database has known
weaknesses in this particular area, and they're impossible to solve just
by throwing hardware at the problem.
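One middle ground worth noting for this workload: in PostgreSQL 8.3 and later, synchronous_commit can be set per transaction, so only writes that truly must survive a crash pay the WAL-flush latency. A hedged sketch (the table and column names are hypothetical, for illustration only):

```sql
-- Per-tab-out autosave: acceptable to lose on a crash, so skip the flush.
BEGIN;
SET LOCAL synchronous_commit TO off;  -- applies to this transaction only
INSERT INTO form_autosave (session_id, field, value)
    VALUES (42, 'email', 'user@example.com');
COMMIT;  -- returns without waiting for the WAL flush

-- Final form submission: must be durable, so use the default.
BEGIN;   -- synchronous_commit remains on
INSERT INTO form_submission (session_id, submitted_at)
    VALUES (42, now());
COMMIT;  -- waits for the WAL to reach disk before reporting success
```

Unlike fsync = off, this never risks corruption; a crash can only lose the most recent asynchronously committed transactions, which matches the "autosave" half of the requirement while keeping the final submit fully durable.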
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg@2ndQuadrant.com www.2ndQuadrant.us
Re: Asynchronous commit | Transaction loss at server crash
am 21.05.2010 06:49:52 von Jesper Krogh
On 2010-05-21 00:04, Greg Smith wrote:
> Jesper Krogh wrote:
> > A Battery Backed raid controller is not that expensive. (in the
> > range of 1 or 2 SSD disks). And it is (more or less) a silverbullet
> > to the task you describe.
>
> Maybe even less; in order to get a SSD that's reliable at all in
> terms of good crash recovery, you have to buy a fairly expensive one.
> Also, and this is really important, you really don't want to deploy
> onto a single SSD and put critical system files there. Their failure
> rates are not that low. You need to put them into a RAID-1 setup and
> budget for two of them, which brings you right back to
I'm currently building an HP D2700 box with 25 x X25-M SSDs; I have added
an LSI 8888ELP RAID controller with 256MB BBWC and 2 separate UPSes
for the 2 independent PSUs on the D2700 (in the pricing numbers that
wasn't a huge part of it).
It has to do with the application. It consists of around 1TB of data that
is accessed fairly rarely and on a more or less random basis. A web
application is connected that tries to deliver, say, 200 random rows from a
main table, and for each of them traverses to connected tables for
information, so an individual page can easily add up to 1000+ random reads
(just for confirming row information).
What we have done so far is to add quite a large amount of code that tries to
collapse the data structure and cache each row for the view, so the 1000+
gets down to the order of 200, but that raises the complexity of the
application, which isn't a good thing either.
I still haven't got the application onto it, plus 12 months of production
usage on top, but so far I'm really looking forward to seeing it, because
for this application it seems like a very good fit.
And about the disk wear: as long as they don't all blow up at the same time,
I don't mind having to change a disk every now and then, so it'll
be really interesting to see if the 20GB/disk/day (which the X25-M is
specced for) is going to be something that really matters in my hands.
I plan on putting the xlog and WAL archive on a fibre-channel slice, so they
essentially don't count in the above numbers.
I don't know if bonnie is accurate in that range, but the last run delivered
over 500K random 4KB reads/s, and it saturated the 2 x 3Gbps SAS links
out of the controller in sequential reads/writes,
consistently over something like 10 runs.
> Also, it's questionable whether a SSD is even going to be faster than
> standard disks for the sequential WAL writes anyway, once a
> non-volatile write cache is available. Sequential writes to SSD are
> the area where the gap in performance between them and spinning disks
> is the smallest.
They are not in a totally different ballpark than spinning disks, but they
require much less "intelligent logic" in the OS/filesystem for read-ahead,
block IO, elevator scheduling, and so on.
> > Plugging your system (SSD's) with an UPS and trusting it fully
> > could solve most of the problems (running in writeback mode).
>
> UPS batteries fail, and people accidentally knock out over server
> power cords. It's a pretty bad server that can't survive someone
> tripping over the cord while it's busy, and that's the situation the
> "use a UPS" idea doesn't improve.
Mounted in a rack with "a lot" of cable binders. Keep in mind that
it should only need power for a few ms before the volatile cache is
flushed.
But I totally agree with you; it is a matter of what application you're
building on top.
.... and we do backup to tape every night, so the "worst case" is not that
the system blows up. It is more:
* The system ends up not performing any better, due to "something
unknown".
or
* The system ends up taking way too much work on the system administration
side, in changing worn disks and rebuilding arrays and such.
This is not the type of system where a single lost transaction matters;
it is more in the analytics/data-mining category, where last week's backup is
more or less as good as today's.
Jesper
--
Jesper