Operational performance: one big table versus many smaller tables

From: David Wall <d(dot)wall(at)computer(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Operational performance: one big table versus many smaller tables
Date: 2009-10-26 16:46:45
Message-ID: [email protected]
Lists: pgsql-general

If I have various record types that are "one up" records that are
structurally similar (same columns) and are mostly retrieved one at a
time by their primary key, is there any performance or operational
benefit to having millions of such records split across multiple tables
(say by their application-level purpose) rather than all in one big
table?
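
For concreteness, a rough sketch of the two layouts I'm comparing (all
table and column names here are made up for illustration):

    -- Option A: one big table, with the application-level purpose
    -- carried as a column
    CREATE TABLE records (
        id      bigserial PRIMARY KEY,
        purpose text NOT NULL,  -- e.g. 'order', 'invoice', ...
        payload text
    );

    -- Option B: one structurally identical table per purpose
    CREATE TABLE order_records   (id bigserial PRIMARY KEY, payload text);
    CREATE TABLE invoice_records (id bigserial PRIMARY KEY, payload text);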

I am thinking of PG query performance (handling queries against multiple
tables, each with hundreds of thousands of rows, versus queries against
a single table with millions of rows), and operational performance
(number of WAL files created, pg_dump, vacuum, etc.).
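
By operational performance I mean things like per-table maintenance,
e.g. (table and database names again made up):

    -- vacuum and refresh planner statistics for a single table
    VACUUM ANALYZE order_records;

    # dump just one table rather than the whole database
    pg_dump -t order_records mydb > order_records.sql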

If anybody has any tips, I'd much appreciate it.

Thanks,
David


From: Richard Huxton <dev(at)archonet(dot)com>
To: David Wall <d(dot)wall(at)computer(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Operational performance: one big table versus many smaller tables
Date: 2009-10-27 09:12:00
Message-ID: [email protected]
Lists: pgsql-general

David Wall wrote:
> If I have various record types that are "one up" records that are
> structurally similar (same columns) and are mostly retrieved one at a
> time by their primary key, is there any performance or operational
> benefit to having millions of such records split across multiple tables
> (say by their application-level purpose) rather than all in one big
> table?

Probably doesn't matter if you're accessing by pkey (and hence index).
Certainly not when you're talking about a few million rows: a btree
lookup scales with the log of the table size, so going from hundreds of
thousands of rows to millions typically adds at most one level to the
index. Arrange your tables so they have meaning and only change that if
necessary.
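
You can confirm the access path yourself (assuming a table named
records with primary key id; the exact plan text will vary):

    EXPLAIN ANALYZE SELECT * FROM records WHERE id = 12345;
    -- expect something like:
    --   Index Scan using records_pkey on records ...

As long as you see an Index Scan rather than a Seq Scan, the lookup
cost barely changes between one big table and several smaller ones.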

--
Richard Huxton
Archonet Ltd