Discussion: Scaleable DB structure for counters...


Scaleable DB structure for counters...

From:
Eci Souji
Date:
So we've got a table called "books" and we want to build records of how
often each book is accessed and when.  How would you store such
information so that it wouldn't become a huge unmanageable table?
Before I go out trying to plan something like this I figured I'd ask and
see if anyone had any experience with such a beast.

One idea I had was to create a separate DB for these counters and create
a schema for each year.  Within each year schema I would create month
tables.  Then I'd write a function to hit whatever schema existed, à la:

SELECT * FROM public.get_counters(date, hour, book_id);

get_day_counters would break up the date and, based on the year, do a
SELECT counters FROM "2006".may WHERE day=12 AND book_id=37.  If hour
had a value it could do SELECT counters FROM "2006".may WHERE day=12
AND book_id=37 AND hour=18.
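
Something like this untested plpgsql sketch, assuming each month table
has day, hour, book_id and counters columns as in the examples above:

   CREATE FUNCTION get_counters(p_date date, p_hour int, p_book_id int)
   RETURNS SETOF int AS $$
   DECLARE
       q text;
       c int;
   BEGIN
       -- build e.g. "2006".may from the date, then filter
       q := 'SELECT counters FROM '
            || quote_ident(to_char(p_date, 'YYYY')) || '.'
            || to_char(p_date, 'FMmonth')
            || ' WHERE day = '   || EXTRACT(day FROM p_date)
            || ' AND book_id = ' || p_book_id;
       IF p_hour IS NOT NULL THEN
           q := q || ' AND hour = ' || p_hour;
       END IF;
       FOR c IN EXECUTE q LOOP
           RETURN NEXT c;
       END LOOP;
       RETURN;
   END;
   $$ LANGUAGE plpgsql;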

Offline scripts would take care of generating and populating these
tables, as they'd be historical and never real-time.

Thoughts?  I'm hoping someone has done something similar and can point
me in the right direction.


- E




Re: Scaleable DB structure for counters...

From:
"Harald Armin Massa"
Date:
Eci,

the usual way is:

create table books  (id_book serial, author text, title text, ...);
create table access (id_access serial, id_book int4, timeofaccess timestamp, ...);

Then for every access you write one record to access.

A rough estimate: a book may be lent out once every hour, so that is 24 × 365 = 8,760 records per book per year;

If you expect that table to get "too big", you can still move over to inheritance:

create table access2006 () inherits (access);
create table access2007 () inherits (access);

and put rules on them to make sure the data goes into the correct table even when you write only to the parent access table. Google "constraint exclusion" in the 8.1 release notes and the PostgreSQL documentation.
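
A minimal sketch, assuming access has just the three columns above (the
CHECK ranges and the single rule shown are only examples):

   create table access2006 (
       check (timeofaccess >= '2006-01-01' and timeofaccess < '2007-01-01')
   ) inherits (access);

   create table access2007 (
       check (timeofaccess >= '2007-01-01' and timeofaccess < '2008-01-01')
   ) inherits (access);

   -- redirect inserts on the parent into the right child
   create rule access_ins_2006 as
       on insert to access
       where new.timeofaccess >= '2006-01-01'
         and new.timeofaccess <  '2007-01-01'
       do instead
       insert into access2006 values (new.id_access, new.id_book, new.timeofaccess);

   -- with SET constraint_exclusion = on, a query like
   --     select count(*) from access where timeofaccess >= '2006-05-01';
   -- skips children whose CHECK constraint rules them out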

Harald





--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
On a different matter:
EuroPython 2006 is over. It was a GREAT conference. If you missed it, you can now prepare a budget for visiting EuroPython 2007.

Re: Scaleable DB structure for counters...

From:
Eci Souji
Date:
What if instead of book checkouts we were looking at how often a book
was referenced?  In which case we're talking multiple times an hour, and
we could easily have each book requiring hundreds of thousands of rows.
Multiply that by hundreds of thousands of books and the table seems to
become huge quite quickly.  Would breaking up the table by year still
make sense?  I'm just not familiar with having to deal with a table that
could easily hit millions of records.

Thanks for your reply,

- E





Re: Scaleable DB structure for counters...

From:
"Harald Armin Massa"
Date:
Eci,

I could not google up examples quickly, but there are people dealing with tables with millions of records in PostgreSQL.

Per the technical specs, the number of rows in a table is unlimited in PostgreSQL:
http://www.postgresql.org/about/

There may be performance reasons to split up a table of that size, but you can still trust PostgreSQL's table inheritance together with constraint exclusion to deal with that: just inherit your tables on a monthly basis:

create table access200601 () inherits (access);

and adjust your rules accordingly.

Read up on this documentation for examples of table partitioning, which
is the technical term for the method you are looking for:

http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html

Best wishes,

Harald


--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
On a different matter:
EuroPython 2006 is over. It was a GREAT conference. If you missed it, you can now prepare a budget for visiting EuroPython 2007.

Re: Scaleable DB structure for counters...

From:
"hubert depesz lubaczewski"
Date:

From your other email I understand that you will deal with a million or
so records in the access table.
The simplest approach would be not to divide it into multiple tables,
but instead just to add a trigger on the access table that increments
counters per book.
Simple and very effective.
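
Untested sketch of that trigger (the counter table and names are just
examples):

   CREATE TABLE book_counters (
       id_book  int4 PRIMARY KEY,
       n_access int8 NOT NULL DEFAULT 0
   );

   CREATE FUNCTION bump_book_counter() RETURNS trigger AS $$
   BEGIN
       UPDATE book_counters SET n_access = n_access + 1
        WHERE id_book = NEW.id_book;
       -- assumes a counter row exists per book; pre-populate
       -- book_counters from books so this update never misses
       RETURN NEW;
   END;
   $$ LANGUAGE plpgsql;

   CREATE TRIGGER access_count_trig
       AFTER INSERT ON access
       FOR EACH ROW EXECUTE PROCEDURE bump_book_counter();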

depesz

Re: Scaleable DB structure for counters...

From:
Christian Kratzer
Date:
Hi,


You might want to keep a separate table with counters per book and per
day or month, which you regularly compute from the raw access records.

Something like the following untested code:

   INSERT INTO access_count
   SELECT id_book, date_trunc('day', timeofaccess) AS dayofaccess,
          count(id_book) AS count
   FROM access
   WHERE date_trunc('day', timeofaccess) = date_trunc('day', now())
   GROUP BY id_book, dayofaccess;
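
This assumes an access_count table roughly like the following (also
untested; the column types are guesses):

   CREATE TABLE access_count (
       id_book     int4,
       dayofaccess timestamp,
       count       int8
   );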

That way you do not need to count all the access records.
You just sum up the pre-computed counts for each period.

   SELECT sum(count) FROM access_count WHERE id_book=?

You also have the option of throwing away the raw access data
for a certain day or month once that period of time is over.

This is more efficient than calling a trigger on each access, and
also more scalable, as there is no contention over a per-book count
record.

Keeping the raw data in per-month or per-year partitions is also
probably a good idea, as it allows you to easily drop specific
partitions.

Greetings
Christian

--
Christian Kratzer                       ck@cksoft.de
CK Software GmbH                        http://www.cksoft.de/
Phone: +49 7452 889 135                 Fax: +49 7452 889 136

Re: Scaleable DB structure for counters...

From:
Ron Johnson
Date:


Are all 200000 books accessed every hour?  What kind of library is
this?  Do you have robot librarians moving at hyperspeed?  Wouldn't
a more reasonable value be 5000 books per *day*?

It's easy to know when a book is checked out.  How do you know when
a book is referenced?  Are all books only accessed by the librarians?

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.

Re: Scaleable DB structure for counters...

From:
Eci Souji
Date:
I think "books" may have thrown everyone for a loop.  These are not
physical books, but rather complete scanned collections that would be
available for search and reference online.  One of the most important
features required would be keeping track of how often each book was
referenced and when.  Time of day, days of week, etc.  This is why I was
looking into how to construct some form of counter system that would
allow us to keep track of accesses.

Although I would love to see a robot librarian at work.  :-)

- E




Re: Scaleable DB structure for counters...

From:
Ron Johnson
Date:

IOW, files.  No problem.

The # of files is known.  That's a start.  Is there any existing
metric as to how often they are accessed?  That's what you need to
know before deciding on a design.

This simple design might be perfectly feasible:
CREATE TABLE T_USAGE_TXN (
    BOOK_ID        INTEGER,
    USER_ID        INTEGER,
    REFERENCED_DT  DATE,
    REFERENCED_TM  TIME );

*All* the columns (in field order) would be the PK, and I'd then add
secondary indexes on
  USER_ID/BOOK_ID
  REFERENCED_DT/BOOK_ID
  REFERENCED_DT/USER_ID
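
Spelled out (the index names are invented):

CREATE INDEX usage_txn_user_book ON T_USAGE_TXN (USER_ID, BOOK_ID);
CREATE INDEX usage_txn_dt_book   ON T_USAGE_TXN (REFERENCED_DT, BOOK_ID);
CREATE INDEX usage_txn_dt_user   ON T_USAGE_TXN (REFERENCED_DT, USER_ID);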

Lastly, create and algorithmically *pre-populate* this table:
CREATE TABLE T_CALENDAR (
    DATE_ANSI      DATE,
    YEARNUM        SMALLINT,
    MONTH_NUM      SMALLINT,
    DAY_OF_MONTH   SMALLINT,
    DAY_OF_WEEK    SMALLINT,
    JULIAN_DAY     SMALLINT );
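
One untested way to do that pre-population, using generate_series and
reading JULIAN_DAY as day-of-year:

INSERT INTO T_CALENDAR
SELECT d,
       EXTRACT(YEAR  FROM d)::smallint,
       EXTRACT(MONTH FROM d)::smallint,
       EXTRACT(DAY   FROM d)::smallint,
       EXTRACT(DOW   FROM d)::smallint,  -- 0 = Sunday, as the query below assumes
       EXTRACT(DOY   FROM d)::smallint
  FROM (SELECT DATE '2006-01-01' + g AS d
          FROM generate_series(0, 3652) AS g) AS days;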

So, if you want a list and count of all books that were referenced
on Sundays in 2006:

SELECT UT.BOOK_ID, COUNT(*)
FROM T_USAGE_TXN UT,
     T_CALENDAR C
WHERE C.YEARNUM = 2006
  AND C.DAY_OF_WEEK = 0
  AND C.DATE_ANSI = UT.REFERENCED_DT
GROUP BY UT.BOOK_ID;



--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.