Re: pgbench - allow to specify scale as a size

From: Fabien COELHO
Subject: Re: pgbench - allow to specify scale as a size
Date:
Msg-id: alpine.DEB.2.20.1802191811190.21372@lancre
In reply to: Re: pgbench - allow to specify scale as a size  (Mark Wong <mark@2ndQuadrant.com>)
Responses: Re: pgbench - allow to specify scale as a size
List: pgsql-hackers
Hello Mark,

> What if we consider using ascii (utf8?) text file sizes as a reference
> point, something independent from the database?

Why not.

TPC-B basically specifies that rows (accounts, tellers, branches) are all 
padded to 100 bytes, thus we could consider (i.e. document) that 
--scale=SIZE refers to the amount of useful data held, and warn that 
actual storage would add various overheads for page and row headers, free 
space at the end of pages, indexes...

Then one scale step is 100,000 accounts, 10 tellers and 1 branch, i.e.
100,011 * 100 bytes ~ 9.5 MiB of useful data per scale step.
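For illustration, the arithmetic above can be spelled out (a sketch, not 
pgbench code; the variable names are mine):

```python
# Useful-data math for one pgbench scale step, per the TPC-B row padding:
# 100,000 accounts + 10 tellers + 1 branch, each row padded to 100 bytes.
ROW_BYTES = 100
rows_per_step = 100_000 + 10 + 1          # accounts + tellers + branches
useful_bytes = rows_per_step * ROW_BYTES  # 10,001,100 bytes
useful_mib = useful_bytes / (1024 * 1024)
print(f"{useful_mib:.2f} MiB of useful data per scale step")  # ~9.54 MiB
```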

> I realize even if a flat file size can be used as a more consistent
> reference across platforms, it doesn't help with accurately determining
> the final data file sizes due to any architecture specific nuances or
> changes in the backend.  But perhaps it might still offer a little more
> meaning to be able to say "50 gigabytes of bank account data" rather
> than "10 million rows of bank accounts" when thinking about size over
> cardinality.

Yep.

Now the overhead is really 60-65%. Although the specification is 
unambiguous, we would still need some maths to know whether the data fits 
in buffers or memory... The point of Karel's regression is to take this 
into account.
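As a rough sketch of what a size-to-scale conversion could look like under 
that overhead (the 1.625 factor, the helper name, and the rounding policy 
are all assumptions for illustration, not the actual patch):

```python
# Hypothetical sketch: pick the smallest scale whose estimated on-disk
# size covers a requested target, assuming the ~60-65% storage overhead
# discussed above on top of the 100-byte padded rows.
MIB = 1024 * 1024
USEFUL_BYTES_PER_STEP = 100_011 * 100   # ~9.54 MiB of padded row data
OVERHEAD = 1.625                        # midpoint of the 60-65% range (assumed)

def scale_for_size(target_bytes):
    """Smallest scale whose estimated storage reaches target_bytes."""
    per_step = int(USEFUL_BYTES_PER_STEP * OVERHEAD)
    return max(1, -(-target_bytes // per_step))  # ceiling division

# "50 gigabytes of bank account data" from the quoted example:
print(scale_for_size(50 * 1024 * MIB))
```

The estimate is only as good as the overhead factor, which is exactly why a 
fitted regression is more attractive than a single constant.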

Also, whether this option would be more admissible to Tom is still an open 
question. Tom?

-- 
Fabien.

