Discussion: No problem, only curiosity


No problem, only curiosity

From: Loles
Date:
Hello community!

Does anyone know why I am getting a result of less than 2 million rows in this example?

postgres=# create table test_n_tup_live (id serial);
CREATE TABLE

postgres=# insert into test_n_tup_live select generate_series(1,1000000);
INSERT 0 1000000

postgres=# analyze test_n_tup_live ;
ANALYZE

postgres=# select n_live_tup, n_dead_tup from pg_stat_user_tables where relname = 'test_n_tup_live';

 n_live_tup | n_dead_tup
------------+------------
    1000000 |          0
(1 fila)

postgres=# update test_n_tup_live set id = id + 1;
UPDATE 1000000

postgres=# insert into test_n_tup_live select generate_series(1,1000000);
INSERT 0 1000000

postgres=# select n_live_tup, n_dead_tup from pg_stat_user_tables where relname = 'test_n_tup_live';
 n_live_tup | n_dead_tup
------------+------------
    2000000 |    1000000
(1 fila)

postgres=# vacuum test_n_tup_live;
VACUUM

postgres=# select n_live_tup, n_dead_tup from pg_stat_user_tables where relname = 'test_n_tup_live';

 n_live_tup | n_dead_tup
------------+------------
    1999925 |          0
(1 fila)

Thanks!

Re: No problem, only curiosity

From: Erik Wienhold
Date:
> On 24/01/2023 14:06 CET Loles <lolesft@gmail.com> wrote:
>
> Does anyone know why I am getting a result less than 2 million rows in this
> example?
>
> postgres=# create table test_n_tup_live (id serial);
> CREATE TABLE
>
> postgres=# insert into test_n_tup_live select generate_series(1,1000000);
> INSERT 0 1000000
>
> postgres=# analyze test_n_tup_live ;
> ANALYZE
>
> postgres=# select n_live_tup, n_dead_tup from pg_stat_user_tables where relname = 'test_n_tup_live';
>
>  n_live_tup | n_dead_tup
> ------------+------------
>     1000000 |          0
> (1 fila)
>
> postgres=# update test_n_tup_live set id = id + 1;
> UPDATE 1000000
>
> postgres=# insert into test_n_tup_live select generate_series(1,1000000);
> INSERT 0 1000000
>
> postgres=# select n_live_tup, n_dead_tup from pg_stat_user_tables where relname = 'test_n_tup_live';
>  n_live_tup | n_dead_tup
> ------------+------------
>     2000000 |    1000000
> (1 fila)
>
> postgres=# vacuum test_n_tup_live;
> VACUUM
>
> postgres=# select n_live_tup, n_dead_tup from pg_stat_user_tables where relname = 'test_n_tup_live';
>
>  n_live_tup | n_dead_tup
> ------------+------------
>     1999925 |          0
> (1 fila)

Both numbers are estimates:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW
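For an exact figure the table itself has to be scanned. As a sketch (locale-dependent output formatting aside), count(*) on the example table should return exactly 2 million even while n_live_tup reports a nearby estimate:

```
postgres=# select count(*) from test_n_tup_live;
  count
---------
 2000000
(1 fila)
```

The view is fed by the statistics collector, which is updated asynchronously, so small deviations after a bulk UPDATE plus VACUUM are expected.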

--
Erik



Re: No problem, only curiosity

From: Loles
Date:
Oh yes... sorry, and thank you.

On Tue, Jan 24, 2023 at 14:51, Erik Wienhold (<ewie@ewie.name>) wrote:
Both numbers are estimates: https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW

--
Erik