postgres invoked oom-killer
on 07.05.2010 16:26:51 by Silvio Brandani
We have a PostgreSQL 8.3.8 on Linux.
We get the following messages in /var/log/messages:
May 6 22:31:01 pgblade02 kernel: postgres invoked oom-killer:
gfp_mask=0x201d2, order=0, oomkilladj=0
May 6 22:31:01 pgblade02 kernel:
May 6 22:31:01 pgblade02 kernel: Call Trace:
May 6 22:31:19 pgblade02 kernel: []
out_of_memory+0x8e/0x2f5
May 6 22:31:19 pgblade02 kernel: []
__alloc_pages+0x22b/0x2b4
May 6 22:31:19 pgblade02 kernel: []
__do_page_cache_readahead+0x95/0x1d9
May 6 22:31:19 pgblade02 kernel: []
__wait_on_bit_lock+0x5b/0x66
May 6 22:31:19 pgblade02 kernel: []
:dm_mod:dm_any_congested+0x38/0x3f
May 6 22:31:19 pgblade02 kernel: []
filemap_nopage+0x148/0x322
May 6 22:31:19 pgblade02 kernel: []
__handle_mm_fault+0x1f8/0xdf4
May 6 22:31:19 pgblade02 kernel: []
do_page_fault+0x4b8/0x81d
May 6 22:31:19 pgblade02 kernel: []
thread_return+0x0/0xeb
May 6 22:31:19 pgblade02 kernel: [] error_exit+0x0/0x84
May 6 22:31:27 pgblade02 kernel:
May 6 22:31:28 pgblade02 kernel: Mem-info:
May 6 22:31:28 pgblade02 kernel: Node 0 DMA per-cpu:
May 6 22:31:28 pgblade02 kernel: cpu 0 hot: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: cpu 0 cold: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: cpu 1 hot: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: cpu 1 cold: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: cpu 2 hot: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: cpu 2 cold: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: cpu 3 hot: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: cpu 3 cold: high 0, batch 1 used:0
May 6 22:31:28 pgblade02 kernel: Node 0 DMA32 per-cpu:
May 6 22:31:28 pgblade02 kernel: cpu 0 hot: high 186, batch 31 used:27
May 6 22:31:29 pgblade02 kernel: cpu 0 cold: high 62, batch 15 used:54
May 6 22:31:29 pgblade02 kernel: cpu 1 hot: high 186, batch 31 used:23
May 6 22:31:29 pgblade02 kernel: cpu 1 cold: high 62, batch 15 used:49
May 6 22:31:29 pgblade02 kernel: cpu 2 hot: high 186, batch 31 used:12
May 6 22:31:29 pgblade02 kernel: cpu 2 cold: high 62, batch 15 used:14
May 6 22:31:29 pgblade02 kernel: cpu 3 hot: high 186, batch 31 used:50
May 6 22:31:29 pgblade02 kernel: cpu 3 cold: high 62, batch 15 used:60
May 6 22:31:29 pgblade02 kernel: Node 0 Normal per-cpu:
May 6 22:31:29 pgblade02 kernel: cpu 0 hot: high 186, batch 31 used:5
May 6 22:31:29 pgblade02 kernel: cpu 0 cold: high 62, batch 15 used:48
May 6 22:31:29 pgblade02 kernel: cpu 1 hot: high 186, batch 31 used:11
May 6 22:31:29 pgblade02 kernel: cpu 1 cold: high 62, batch 15 used:39
May 6 22:31:29 pgblade02 kernel: cpu 2 hot: high 186, batch 31 used:14
May 6 22:31:29 pgblade02 kernel: cpu 2 cold: high 62, batch 15 used:57
May 6 22:31:29 pgblade02 kernel: cpu 3 hot: high 186, batch 31 used:94
May 6 22:31:29 pgblade02 kernel: cpu 3 cold: high 62, batch 15 used:36
May 6 22:31:29 pgblade02 kernel: Node 0 HighMem per-cpu: empty
May 6 22:31:29 pgblade02 kernel: Free pages: 41788kB (0kB HighMem)
May 6 22:31:29 pgblade02 kernel: Active:974250 inactive:920579 dirty:0
writeback:0 unstable:0 free:10447 slab:11470 mapped-file:985
mapped-anon:1848625 pagetables:111027
May 6 22:31:29 pgblade02 kernel: Node 0 DMA free:11172kB min:12kB
low:12kB high:16kB active:0kB inactive:0kB present:10816kB
pages_scanned:0 all_unreclaimable? yes
May 6 22:31:29 pgblade02 kernel: lowmem_reserve[]: 0 3254 8052 8052
May 6 22:31:29 pgblade02 kernel: Node 0 DMA32 free:23804kB min:4636kB
low:5792kB high:6952kB active:1555260kB inactive:1566144kB
present:3332668kB pages_scanned:35703257 all_unreclaimable? yes
May 6 22:31:29 pgblade02 kernel: lowmem_reserve[]: 0 0 4797 4797
May 6 22:31:29 pgblade02 kernel: Node 0 Normal free:6812kB min:6836kB
low:8544kB high:10252kB active:2342332kB inactive:2115836kB
present:4912640kB pages_scanned:10165709 all_unreclaimable? yes
May 6 22:31:29 pgblade02 kernel: lowmem_reserve[]: 0 0 0 0
May 6 22:31:29 pgblade02 kernel: Node 0 HighMem free:0kB min:128kB
low:128kB high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0
all_unreclaimable? no
May 6 22:31:29 pgblade02 kernel: lowmem_reserve[]: 0 0 0 0
May 6 22:31:29 pgblade02 kernel: Node 0 DMA: 3*4kB 5*8kB 3*16kB 6*32kB
4*64kB 3*128kB 0*256kB 0*512kB 2*1024kB 0*2048kB 2*4096kB = 11172kB
May 6 22:31:29 pgblade02 kernel: Node 0 DMA32: 27*4kB 0*8kB 1*16kB
0*32kB 2*64kB 4*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 5*4096kB = 23804kB
May 6 22:31:29 pgblade02 kernel: Node 0 Normal: 21*4kB 9*8kB 26*16kB
3*32kB 6*64kB 5*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 1*4096kB = 6812kB
May 6 22:31:29 pgblade02 kernel: Node 0 HighMem: empty
May 6 22:31:29 pgblade02 kernel: Swap cache: add 71286821, delete
71287152, find 207780333/216904318, race 1387+10506
May 6 22:31:29 pgblade02 kernel: Free swap = 0kB
May 6 22:31:30 pgblade02 kernel: Total swap = 8388600kB
May 6 22:31:30 pgblade02 kernel: Free swap: 0kB
May 6 22:31:30 pgblade02 kernel: 2293759 pages of RAM
May 6 22:31:30 pgblade02 kernel: 249523 reserved pages
May 6 22:31:30 pgblade02 kernel: 56111 pages shared
May 6 22:31:30 pgblade02 kernel: 260 pages swap cached
May 6 22:31:30 pgblade02 kernel: Out of memory: Killed process 29076
(postgres).
We get the following errors in the postgres log:
A couple of times:
2010-05-06 22:26:28 CEST [23001]: [2-1] WARNING: worker took too long
to start; cancelled
Then:
2010-05-06 22:31:21 CEST [29059]: [27-1] LOG: system logger process
(PID 29076) was terminated by signal 9: Killed
Finally:
2010-05-06 22:50:20 CEST [29059]: [28-1] LOG: background writer process
(PID 22999) was terminated by signal 9: Killed
2010-05-06 22:50:20 CEST [29059]: [29-1] LOG: terminating any other
active server processes
Any help highly appreciated,
Re: postgres invoked oom-killer
on 07.05.2010 17:15:14 by Lacey Powers
Silvio Brandani wrote:
> We have a PostgreSQL 8.3.8 on Linux.
>
> We get the following messages in /var/log/messages:
>
> May 6 22:31:01 pgblade02 kernel: postgres invoked oom-killer:
> gfp_mask=0x201d2, order=0, oomkilladj=0
>
> *** snip ***
>
> May 6 22:31:30 pgblade02 kernel: Out of memory: Killed process 29076
> (postgres).
>
> Any help highly appreciated,
>
Hello Silvio,
Is this machine dedicated to PostgreSQL?
If so, I'd recommend adding these two parameters to your sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 0
So that OOMKiller is turned off.
PostgreSQL should gracefully degrade if a malloc() fails because it asks
for too much memory.
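For example (just a sketch, assuming a stock sysctl and root access), the
settings can be applied and verified like this:

  # persist the settings, then reload /etc/sysctl.conf
  echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf
  echo "vm.overcommit_ratio = 0" >> /etc/sysctl.conf
  sysctl -p

  # or apply them immediately without editing the file
  sysctl -w vm.overcommit_memory=2
  sysctl -w vm.overcommit_ratio=0

  # see the commit limit the kernel will now enforce
  grep -E 'CommitLimit|Committed_AS' /proc/meminfo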
Hope that helps. =)
Regards,
Lacey
--
Lacey Powers
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 ext 104
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: postgres invoked oom-killer
on 07.05.2010 17:35:54 by Greg Spiegelberg
On Fri, May 7, 2010 at 8:26 AM, Silvio Brandani wrote:
> We have a PostgreSQL 8.3.8 on Linux.
>
> We get the following messages in /var/log/messages:
>
> May 6 22:31:01 pgblade02 kernel: postgres invoked oom-killer:
> gfp_mask=0x201d2, order=0, oomkilladj=0
>
*** snip ***
> May 6 22:31:30 pgblade02 kernel: Out of memory: Killed process 29076
> (postgres).
>
Silvio,
Is this system a virtual machine?
Greg
Re: postgres invoked oom-killer
on 07.05.2010 17:42:38 by Silvio Brandani
Lacey Powers wrote:
> Silvio Brandani wrote:
>> We have a PostgreSQL 8.3.8 on Linux.
>>
>> We get the following messages in /var/log/messages:
>>
>> May 6 22:31:01 pgblade02 kernel: postgres invoked oom-killer:
>> gfp_mask=0x201d2, order=0, oomkilladj=0
>>
>> *** snip ***
>>
>> May 6 22:31:30 pgblade02 kernel: Out of memory: Killed process 29076
>> (postgres).
>>
>> Any help highly appreciated,
>>
> Hello Silvio,
>
> Is this machine dedicated to PostgreSQL?
>
> If so, I'd recommend adding these two parameters to your sysctl.conf
>
> vm.overcommit_memory = 2
> vm.overcommit_ratio = 0
>
> So that OOMKiller is turned off.
>
> PostgreSQL should gracefully degrade if a malloc() fails because it
> asks for too much memory.
>
> Hope that helps. =)
>
> Regards,
> Lacey
>
>
Thanks a lot,
yes the server is dedicated to PostgreSQL.
Could the fact that the system went out of memory be a bug in
PostgreSQL? What could be the cause of it?
Regards,
Silvio
--
Silvio Brandani
Infrastructure Administrator
SDB Information Technology
Phone: +39.055.3811222
Fax: +39.055.5201119
Re: postgres invoked oom-killer
on 07.05.2010 17:46:45 by Silvio Brandani
Greg Spiegelberg wrote:
> On Fri, May 7, 2010 at 8:26 AM, Silvio Brandani wrote:
>
> We have a PostgreSQL 8.3.8 on Linux.
>
> We get the following messages in /var/log/messages:
>
> May 6 22:31:01 pgblade02 kernel: postgres invoked oom-killer:
> gfp_mask=0x201d2, order=0, oomkilladj=0
>
> *** snip ***
>
> May 6 22:31:30 pgblade02 kernel: Out of memory: Killed process
> 29076 (postgres).
>
>
> Silvio,
>
> Is this system a virtual machine?
>
> Greg
>
No, it is not.
It has been up and running for 4 months, with 60 GB of data in 9 different databases.
Lately we imported a new schema into one of those databases.
Silvio
--
Silvio Brandani
Infrastructure Administrator
SDB Information Technology
Phone: +39.055.3811222
Fax: +39.055.5201119
Re: postgres invoked oom-killer
on 07.05.2010 17:49:10 by Silvio Brandani
Silvio Brandani wrote:
> Lacey Powers wrote:
>> Silvio Brandani wrote:
>>> We have a PostgreSQL 8.3.8 on Linux.
>>>
>>> *** snip ***
>>>
>>> Any help highly appreciated,
>>>
>> Hello Silvio,
>>
>> Is this machine dedicated to PostgreSQL?
>>
>> If so, I'd recommend adding these two parameters to your sysctl.conf
>>
>> vm.overcommit_memory = 2
>> vm.overcommit_ratio = 0
>>
>> So that OOMKiller is turned off.
>>
>> PostgreSQL should gracefully degrade if a malloc() fails because it
>> asks for too much memory.
>>
>> Hope that helps. =)
>>
>> Regards,
>> Lacey
>>
> Thanks a lot,
> yes the server is dedicated to PostgreSQL.
>
> Could the fact that the system went out of memory be a bug in
> PostgreSQL? What could be the cause of it?
>
> Regards,
> Silvio
>
Re: postgres invoked oom-killer
on 07.05.2010 17:49:43 by Silvio Brandani
Silvio Brandani wrote:
> Greg Spiegelberg wrote:
>> On Fri, May 7, 2010 at 8:26 AM, Silvio Brandani wrote:
>>
>> *** snip ***
>>
>> Silvio,
>>
>> Is this system a virtual machine?
>>
>> Greg
>>
> No, it is not.
>
> It has been up and running for 4 months, with 60 GB of data in 9
> different databases. Lately we imported a new schema into one of
> those databases.
>
> Silvio
>
No, it is not.
It has been up and running for 4 months, with 60 GB of data in 9 different databases.
Lately we imported a new schema into one of those databases.
Silvio
Re: postgres invoked oom-killer
on 07.05.2010 18:39:52 by Lacey Powers
Silvio Brandani wrote:
> Lacey Powers wrote:
>> Silvio Brandani wrote:
>>> We have a PostgreSQL 8.3.8 on Linux.
>>>
>>> *** snip ***
>>>
>> Hello Silvio,
>>
>> Is this machine dedicated to PostgreSQL?
>>
>> If so, I'd recommend adding these two parameters to your sysctl.conf
>>
>> vm.overcommit_memory = 2
>> vm.overcommit_ratio = 0
>>
>> So that OOMKiller is turned off.
>>
>> PostgreSQL should gracefully degrade if a malloc() fails because it
>> asks for too much memory.
>>
>> Hope that helps. =)
>>
>> Regards,
>> Lacey
>>
>
> Thanks a lot,
> yes the server is dedicated to PostgreSQL.
>
> Could the fact that the system went out of memory be a bug in
> PostgreSQL? What could be the cause of it?
>
> Regards,
> Silvio
>
Hello Silvio,
This isn't a bug in PostgreSQL.
OOMKiller is an OS-level mechanism, designed to free up memory
by terminating a low-priority process.
So, something filled up the available memory in your server, and
OOMKiller then decided to ungracefully terminate PostgreSQL. =(
It's equivalent to sending a kill -9 to PostgreSQL, which is not
a good thing to do, ever.
If you have sar running (or other system resource logging) and pair
that data with your other log data, you might be able to get an idea
of what caused the out-of-memory condition, if you're interested.
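As a rough illustration (assuming sysstat's sar keeps its daily files
under /var/log/sa, the usual default; sa06 would be the file for May 6):

  # memory utilization around the time of the kill
  sar -r -f /var/log/sa/sa06 -s 22:00:00 -e 22:35:00

  # swapping activity over the same window
  sar -W -f /var/log/sa/sa06 -s 22:00:00 -e 22:35:00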
But, since this is a dedicated machine, if you just add the parameters
to your sysctl.conf, this shouldn't happen again. =)
Hope that helps. =)
Regards,
Lacey
--
Lacey Powers
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 ext 104
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: postgres invoked oom-killer
on 10.05.2010 14:33:35 by Silvio Brandani
Giles Lean wrote:
> Silvio Brandani wrote:
>
>
>> yes the server is dedicated to PostgreSQL.
>>
>> Could the fact that the system went out of memory be a bug in
>> PostgreSQL? What could be the cause of it?
>>
>> Regards,
>> Silvio
>>
>
> The out of memory killer is a totally bogus misfeature of
> Linux. At least these days they let you turn it off, and
> everyone should turn it off _and_ should bug their OS vendor
> to request that they ship their distribution with memory
> overcommit and the out of memory killer disabled.
>
> The "out of memory" killer only exists because Linux by
> default will allow more memory to be allocated than exists in
> the system. This is called "memory overcommit" and is known in
> operating system circles (outside of Linux) to be a bad thing.
>
> The rationale for memory overcommit being a bad thing is that
> if your OS doesn't allow allocation of more memory than there
> is in the system, then applications are forced to deal with
> memory requests that fail, instead of seeing those requests
> "succeed" and being killed some random time later.
>
> Allowing more memory to be "allocated" than exists in the
> machine is only useful in two circumstances that I know of,
> and neither apply to a dedicated PostgreSQL system, running
> Linux or not:
>
> 1. When badly written applications allocate far more memory
> than they use. PostgreSQL isn't like that.
>
> The technically correct solution is to fix the
> applications; allowing memory overcommit is a dangerous
> workaround that _will_ cause problems.
>
> 2. When a very large process (e.g. a scientific numerical analysis
> program using most of physical memory) needs to fork just to send
> an email notification or something similar.
>
> The technically correct solution is to use vfork().
>
> I won't make myself popular with the Linux zealots, but memory
> overcommit should just be removed from Linux. It does way more
> harm than good on any operating system I've ever seen it used
> on, and the problems it has caused just on Linux are legion.
>
> The only OS that had _any_ excuse for memory overcommit I've
> seen was one that lacked a vfork() system call; they added
> that call and the justification for memory overcommit went
> away.
>
> Regards,
>
> Giles
>
>
Thanks a lot,
I'm going to disable the OOM killer; moreover, I have the operating
system watcher scripts running to monitor ongoing activity in case it
happens again.
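For reference, a quick way to confirm the change took effect (standard
procfs paths; just a sketch):

  cat /proc/sys/vm/overcommit_memory    # should print 2 after the change
  cat /proc/sys/vm/overcommit_ratio
  grep -E 'CommitLimit|Committed_AS' /proc/meminfo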
Best Regards,
Silvio Brandani
--
Silvio Brandani
Infrastructure Administrator
SDB Information Technology
Phone: +39.055.3811222
Fax: +39.055.5201119
Re: postgres invoked oom-killer
on 10.05.2010 15:52:20 by Greg Spiegelberg
2010/5/7 Silvio Brandani
> Greg Spiegelberg wrote:
>
>
>> Is this system a virtual machine?
>>
>> Greg
>>
> No, it is not.
>
> It has been up and running for 4 months, with 60 GB of data in 9
> different databases. Lately we imported a new schema into one of those
> databases.
>
>
Okay. I asked because I have seen VMware's balloon driver trigger the OOMK
in Linux, killing databases over and over. It wasn't enough in these cases to
simply turn off the OOMK. VMware has a setting for each VM to "reserve" memory
for that VM, ensuring the balloon driver won't be used beyond a certain extent,
or at all. Then and only then would disabling the OOMK really work.
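A rough way to check whether the balloon driver is even loaded on a Linux
guest (module names vary with kernel and VMware tools versions; vmw_balloon
and vmmemctl are the common ones, and vmware-toolbox-cmd is only present if
the tools are installed):

  lsmod | grep -Ei 'vmw_balloon|vmmemctl'

  # with the tools installed, the current balloon size is also visible
  vmware-toolbox-cmd stat balloon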
Greg