Thread: How can I know the size of a database, table by table?


How can I know the size of a database, table by table?

From: Bruno Baguette
Date:
Hello,

I work on a PostgreSQL 8.0.4 database, and I would like to know the
size (in bytes, megabytes, or anything else) of that database, for each table.

How can I do that?

Thanks in advance!

--
Bruno BAGUETTE

Re: How can I know the size of a database, table by table

From: Leonel Nunez
Date:
Bruno Baguette wrote:
> Hello,
>
> I work on a PostgreSQL 8.0.4 database, and I would like to know the
> size (in bytes, megabytes, or anything else) of that database, for each
> table.
>
> How can I do that?
>
> Thanks in advance!
>


The dbsize module, found in the contrib package (or in the contrib subdirectory of the source tree), does this.
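
For reference, contrib modules of that era are loaded by running their SQL script against the target database, after which the functions can be called directly. The sketch below rests on assumptions: the install path and the function names database_size/relation_size are taken from older contrib/dbsize releases and may differ in 8.0.4, so check the README that ships with your copy.

-- load the module into the database (path is an assumption; adjust for your install)
-- $ psql -d mydatabase -f /usr/local/pgsql/share/contrib/dbsize.sql

SELECT database_size('mydatabase');   -- whole database, in bytes
SELECT relation_size('mytable');      -- a single table, in bytes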


leonel


Re: How can I know the size of a database, table by table?

From: Andreas Kretschmer
Date:
Bruno Baguette <pgsql-ml@baguette.net> wrote:

> Hello,
>
> I work on a PostgreSQL 8.0.4 database, and I would like to know the size
> (in bytes, megabytes, or anything else) of that database, for each table.
>
> How can I do that?

I'm not sure if these functions are available in 8.0; if not, see contrib.


http://www.postgresql.org/docs/8.1/interactive/functions-admin.html

=>

pg_relation_size(text) and others.
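
For example, on 8.1 those built-in functions can be used as in the sketch below ('mytable' is just a placeholder name):

-- 8.1: on-disk size of one table's main file, in bytes (indexes and TOAST not included)
SELECT pg_relation_size('mytable');

-- 8.1: human-readable size of the whole current database
SELECT pg_size_pretty(pg_database_size(current_database()));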



Andreas
--
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.                              (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."    (unknow)
Kaufbach, Saxony, Germany, Europe.              N 51.05082°, E 13.56889°

Re: How can I know the size of a database, table by table?

From: "Shoaib Mir"
Date:
The space occupied by your database and tables can be found this way:

# select oid from pg_database where datname='mybase';

 oid 
-------
 17239
(1 row)

$ du -hs data/base/17239
13.5 M

To get an approximate size for a single table, you can do the following:

SELECT relpages * 8192 AS size_in_bytes
FROM pg_class
WHERE relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = '<schema name>')
  AND relname = '<table name>';
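
Extending the same idea, a rough per-table listing for one schema could look like the sketch below (relpages is only refreshed by VACUUM/ANALYZE, so treat the numbers as estimates):

-- approximate size of every ordinary table in a schema, largest first
SELECT c.relname,
       c.relpages * 8192 AS approx_bytes
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'
ORDER BY c.relpages DESC;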

/Shoaib



On 3/27/06, Bruno Baguette <pgsql-ml@baguette.net> wrote:
Hello,

I work on a PostgreSQL 8.0.4 database, and I would like to know the
size (in bytes, megabytes, or anything else) of that database, for each table.

How can I do that?

Thanks in advance!

--
Bruno BAGUETTE


Re: How can I know the size of a database, table by table

From: Steve Crawford
Date:
Bruno Baguette wrote:
> Hello,
>
> I work on a PostgreSQL 8.0.4 database, and I would like to know the
> size (in bytes, megabytes, or anything else) of that database, for each table.
>
> How can I do that?
>
> Thanks in advance!
>

This will probably work for you (in bytes; multiply relpages by 8
instead of 8192 for kilobytes):

select relpages*8192 from pg_class where relname = 'your_table_name';

An 8k page size is typical, but you may need to verify this for your system.
Also, there may be TOAST tables associated with your main table, so you may
need to add their sizes to your total for the table (see the sketch below).
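
A minimal sketch of folding the TOAST table in via pg_class.reltoastrelid (still an estimate, for the same relpages reasons as above):

-- approximate size of a table plus its TOAST table, in bytes
SELECT (c.relpages + COALESCE(t.relpages, 0)) * 8192 AS approx_bytes
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'your_table_name';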

See the manual section on monitoring disk usage for more
complete/accurate info:
http://www.postgresql.org/docs/8.1/interactive/diskusage.html


Cheers,
Steve

Re: How can I know the size of a database, table by table?

From: Kevin Murphy
Date:
Bruno Baguette wrote:
> I work on a PostgreSQL 8.0.4 database, and I would like to know the
> size (in bytes, megabytes, or anything else) of that database, for each
> table.
You've already got several replies.  Know that the PG documentation is
excellent.  If you visit www.postgresql.org, you will quickly find:
http://www.postgresql.org/docs/8.0/interactive/diskusage.html

The following won't help you, but just for completeness, in PG 8.1,
dbsize functionality is built in, so you can just do:

select tablename, pg_size_pretty(pg_total_relation_size(tablename)) as size
from pg_tables
where schemaname in ('public')
order by pg_total_relation_size(tablename) desc;

select pg_size_pretty(pg_database_size(current_database())) as total;

Also, replace or add to 'public' as required by the schemas you use.
The sizes reported include the indexes for each table.

The total includes not just your tables but the system tables also.
Unless you have collected lots of statistics on the database, the system
tables (pg_*) will probably only take up a few MB of space.

Note that pg_size_pretty() _rounds_ the raw byte numbers, so the pretty
results may understate the space requirements by half a unit.

If you want to exclude the system tables from the total database size,
then instead of pg_database_size(), you could use:

select pg_size_pretty(cast(sum(pg_total_relation_size(tablename)) as bigint)) as size
from pg_tables
where schemaname in ('public');

Kevin Murphy


Unexplained lock creating table

From: Wes
Date:
I have a C application (libpq) that uses partitioning.  I create parent
tables 'header' and 'detail'.  The application opens multiple
connections, creates the child tables, and uses COPY to import the data:

  open connection 1
  begin
  create table header_1
  COPY into header_1

  open connection 2
  begin
  create table header_2
  COPY into header_2

  open connection 3
  begin
  create table header_3
  COPY into header_3

   [ potentially more connections ]

  end copy 1
  end copy 2
  end copy 3

  commit 1
  commit 2
  commit 3

After the database is initialized, I run the application.  It creates table
header_1 and initiates the copy (transaction still pending).  However, when
it tries to create table header_2, it hangs.  A ps shows

   postgres: test trace [local] COPY
   postgres: test trace [local] CREATE TABLE waiting

However, if I force table header_1 to be created outside the COPY
transaction (using psql, manually committing the transaction from within
gdb, etc.), then run the application, it works regardless of the number of
open connections/transactions.

I then drop all the child tables, leaving the parent table, and rerun the
application.  It again works for all connections.

The problem occurs only when the database has been freshly initialized and
no child table has ever existed.

I confirm this by:

  1. Reinitialize database.
  2. Run application.  Verify hang occurs.  I can rerun step 2 any number of
times and it continues to hang.
  3. Create header_dummy using psql
  4. Drop header_dummy
  5. Run application - works.

I can repeat this with the 'detail' table.  It is 100% reproducible.

What's going on?
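
For reference, the hang can be reproduced from two plain psql sessions with a sketch like the one below (it assumes the child tables are created with INHERITS, as is usual for 8.x partitioning, and that the parent has never had a child table before):

-- session 1: first-ever child of a freshly created parent
BEGIN;
CREATE TABLE header_1 () INHERITS (header);
-- leave the transaction open (e.g. while a COPY is running)

-- session 2: blocks until session 1 commits or aborts
BEGIN;
CREATE TABLE header_2 () INHERITS (header);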

Wes



Re: Unexplained lock creating table

From: Andrew - Supernews
Date:
On 2006-04-22, Wes <wespvp@syntegra.com> wrote:
> I have a C application (libpq) that uses partitioning.  I create parent
> tables 'header' and 'detail'.  The application opens multiple
> connections, creates the child tables, and uses COPY to import the data:
>
>   open connection 1
>   begin
>   create table header_1

If this is the first child table of "header", which it presumably is, then
the pg_class row for "header" has to be updated to set relhassubclass. Since
you do not commit the create before continuing, this backend continues to
hold a row lock on the updated, uncommitted pg_class row while you do...

>   COPY into header_1
>
>   open connection 2
>   begin
>   create table header_2

At this point connection 2, which does not yet see the existence of header_1
and the updated pg_class row for "header" (since they are not yet committed
and are thus invisible even in SnapshotNow), _also_ believes it needs to
update the pg_class row for "header" for the same reason. However, the
update attempt immediately runs into the locked/uncommitted row belonging
to connection 1, and must therefore wait on the lock before proceeding...

(If in fact you allow connection 1 to complete and commit, the create table
in connections 2+ may then bail out with a "tuple concurrently updated"
error, since catalog updates don't generally have the recheck-after-lock
logic used for user queries in read-committed mode, so can't cope with the
fact that another connection updated the tuple. If connection 1 aborts the
transaction instead, then connection 2 can proceed.)

[...]
> However, if I force table header_1 to be created outside the COPY
> transaction (using psql, manually committing the transaction from within
> gdb, etc.), then run the application, it works regardless of the number of
> open connections/transactions.
>
> I then drop all the child tables, leaving the parent table, and rerun the
> application.  It again works for all connections.

relhassubclass isn't reset to false when all child tables of a parent
table are removed. So next time through there is no need to update the
pg_class row for the parent table.

So the simplest workaround is probably to ensure that you create at least
one partition in each table at the outset, before trying to actually load
any data. You've already discovered that this works, but at least you now
know why :-)
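
A minimal sketch of that one-time setup (the seed table names are placeholders; as explained above, relhassubclass stays set even after the seed children are dropped):

-- run once, in its own committed transaction, before any concurrent loads
BEGIN;
CREATE TABLE header_seed () INHERITS (header);
CREATE TABLE detail_seed () INHERITS (detail);
COMMIT;

-- optional: drop the seed partitions again; relhassubclass on the parents
-- is not reset, so later CREATE TABLE ... INHERITS need not touch the
-- parents' pg_class rows
DROP TABLE header_seed;
DROP TABLE detail_seed;

-- verify the flag if desired
SELECT relname, relhassubclass FROM pg_class WHERE relname IN ('header', 'detail');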

--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

Re: Unexplained lock creating table

From: Wes
Date:
> If this is the first child table of "header", which it presumably is, then
> the pg_class row for "header" has to be updated to set relhassubclass. Since
> you do not commit the create before continuing, this backend continues to
> hold a row lock on the updated, uncommitted pg_class row while you do...

[snip]

> relhassubclass isn't reset to false when all child tables of a parent
> table are removed. So next time through there is no need to update the
> pg_class row for the parent table.
>
> So the simplest workaround is probably to ensure that you create at least
> one partition in each table at the outset, before trying to actually load
> any data.

That's exactly what I was looking for.  Now I know it is safe to do a
'create table' / 'drop table' in my database creation script to circumvent
the problem - or, to be 100% safe, should I not do the 'drop table'?  I was
afraid I might have to move the 'create table' outside of the transactions.

Many thanks

Wes