Re: Where are my tables physically in the fs?

From Medi Montaseri
Subject Re: Where are my tables physically in the fs?
Date
Msg-id 3DFE3D82.5030907@intransa.com
In reply to Where are my tables physically in the fs?  (Egyud Csaba <csegyud@freemail.hu>)
List pgsql-general
This reminds me of 'use English' in the Perl language...
Perhaps someone could write a program that creates
$PGDATA/Base (note the B is in uppercase)
which contains
$PGDATA/Base/student_table --> ../base/12563

And so on and so forth... i.e., a bunch of symlinks.
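A minimal sketch of that idea in Python. The table names and relfilenode numbers below are made up for illustration; in practice you would read the mapping out of the catalog (e.g. SELECT relname, relfilenode FROM pg_class WHERE relkind = 'r'):

```python
import os
import tempfile

# Hypothetical mapping; a real script would query pg_class for this.
tables = {"student_table": 12563, "customer": 12570}

def make_friendly_links(pgdata, mapping):
    """Create $PGDATA/Base/<table> symlinks pointing at ../base/<relfilenode>."""
    friendly = os.path.join(pgdata, "Base")   # note the uppercase B
    os.makedirs(friendly, exist_ok=True)
    for name, filenode in mapping.items():
        link = os.path.join(friendly, name)
        if os.path.islink(link):
            os.remove(link)                   # refresh a stale link
        # relative target, so the links survive moving $PGDATA wholesale
        os.symlink(os.path.join("..", "base", str(filenode)), link)

# Demo against a scratch directory standing in for $PGDATA:
pgdata = tempfile.mkdtemp()
make_friendly_links(pgdata, tables)
print(os.readlink(os.path.join(pgdata, "Base", "student_table")))  # ../base/12563
```

The links go stale whenever a table's relfilenode changes, so such a script would have to be re-run periodically.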

Egyud Csaba wrote:

>Hi!
>
>I know it is a lamer question, but the docs don't include any details
>regarding the physical storage of the db. It is not seriously important for me;
>I would just like to know.
>I use pg 7.2.3 on Red Hat 7.1. The previous version I used was 7.0.3, and it
>was easy to find my tables in the fs. The file names told me something -
>nearly everything. But now all I can find are numbers and more numbers under
>my $PGDATA/base directory.
>
>Once more, it isn't too important. If somebody has some time, please send me a
>URL where I can find some more info.
>
>Thanks, and best regards
>
>Csaba
>
>
>----- Original Message -----
>From: <pgsql-general-owner@postgresql.org>
>To: <pgsql-general@postgresql.org>
>Sent: Friday, December 13, 2002 6:51 PM
>Subject: [pgsql-general] Daily Digest V1 #2863
>
>
>
>
>>Daily Digest
>>Volume 1 : Issue 2863 : "text" Format
>>
>>Messages in this Issue:
>>  Re: INDEX suggestion needed
>>  Re: INDEX suggestion needed
>>  \dD Bug??
>>  Re: \dD Bug??
>>  Re: INDEX suggestion needed
>>  Re: Urgent need of (paid) PostgreSQL support in New
>>  Re: \dD Bug??
>>  Re: Copy/foreign key contraints
>>  getting datatype of array elements
>>  Re: getting datatype of array elements
>>
>>----------------------------------------------------------------------
>>
>>Date: Fri, 13 Dec 2002 18:13:49 +0100
>>From: Manfred Koizar <mkoi-pg@aon.at>
>>To: Thomas Beutin <tyrone@laokoon.IN-Berlin.DE>
>>Cc: pgsql-general@postgresql.org
>>Subject: Re: INDEX suggestion needed
>>Message-ID: <9d4kvusca8dakffi13s0m0no8um8ijs1bv@4ax.com>
>>
>>On Fri, 13 Dec 2002 16:41:38 +0100, Thomas Beutin
>><tyrone@laokoon.IN-Berlin.DE> wrote:
>>
>>
>>>>>(visit >= '2002-10-01' AND visit <= '2002-10-31') index scan, but long (>5sec)
>>>>>
>>>>>
>>>>Ca. 3%; interesting that it takes so long; I'd like to see EXPLAIN
>>>>
>>>>
>>Oops!  Should be 10% according to the last two histogram bounds:
>>
>>
>>>>>          "2002-09-29 09:09:31+02"
>>>>>          "2002-10-29 23:25:13+01"
>>>>>
>>>>>
>>>>ANALYZE output for enable_seqscan on and off.
>>>>
>>>>
>>>There is no difference in cost.
>>>
>>>
>>Oops again!  If it's already using an index scan, switching seqscan
>>off won't change anything.  I should have meant "for enable_indexscan
>>on and off".
>>
>>
>>
>>
>>>>The negative correlation looks strange.  How did you insert your data?
>>>>
>>>>
>>>It is a dump from the production system, and the production system gets
>>>the data once a day from webserver logs line by line.
>>>
>>>
>>Did you insert in reverse order (newest first)?
>>
>>| most_common_freqs = {0.000666667,...,0.000666667}
>>
>>These values occurred exactly twice in the analyzer's sample of 3000
>>values, so for the accuracy we need here it's ok to ignore them.
>>
>>Servus
>> Manfred
>>
>>------------------------------
>>
>>Date: Fri, 13 Dec 2002 18:37:59 +0100
>>From: Manfred Koizar <mkoi-pg@aon.at>
>>To: Alvaro Herrera <alvherre@dcc.uchile.cl>
>>Cc: Thomas Beutin <tyrone@laokoon.IN-Berlin.DE>,
>>pgsql-general@postgresql.org
>>Subject: Re: INDEX suggestion needed
>>Message-ID: <ja5kvuo09052ldtp94maadc7shmhr5vg6t@4ax.com>
>>
>>On Fri, 13 Dec 2002 13:00:14 -0300, Alvaro Herrera
>><alvherre@dcc.uchile.cl> wrote:
>>
>>
>>>Now this caught my attention (on the questions' side, sorry, not the
>>>answers').  Why does the aggregate take 10 times the time needed for the
>>>indexscan?
>>>
>>>
>>Good point!
>>
>>
>>
>>> One would think that a function like count() should be
>>>pretty cheap,
>>>
>>>
>>COUNT is cheap.  But COUNT(DISTINCT something) is not trivial, it has
>>to keep a list of all values it has already counted.  I didn't look at
>>the implementation.  Do we have O(n^2) cost here?
>>
>>Thomas, could you EXPLAIN ANALYZE some test cases with
>>
>>SELECT COUNT(*) FROM (
>>SELECT DISTINCT a_id
>>  FROM stat_pages
>>WHERE ...
>>) AS x;
>>
>>and compare them to the results of SELECT COUNT(DISTINCT ...)?
>>
>>So now you are back where you started.  At least you have an index on
>>"visit" now ;-)
>>
>>Servus
>> Manfred
>>
>>------------------------------
>>
>>Date: Fri, 13 Dec 2002 09:57:47 -0800
>>From: Steve Crawford <scrawford@pinpointresearch.com>
>>To: pgsql-general@postgresql.org
>>Subject: \dD Bug??
>>Message-ID: <20021213175747.859CF103BD@polaris.pinpointresearch.com>
>>
>>Is this a known bug? I created a database (test) and two schemas (s1 and s2),
>>each of which has one table (s1.test and s2.test), but when I use \dD I see
>>no schemas listed:
>>
>>test=# select * from s1.test;
>>        x
>>-----------------
>> I am in s1.test
>>(1 row)
>>
>>test=# select * from s2.test;
>>        x
>>-----------------
>> I am in s2.test
>>(1 row)
>>
>>test=# \dD
>>         List of domains
>> Schema | Name | Type | Modifier
>>--------+------+------+----------
>>(0 rows)
>>
>>Cheers,
>>Steve
>>
>>------------------------------
>>
>>Date: Fri, 13 Dec 2002 13:16:28 -0500
>>From: Tom Lane <tgl@sss.pgh.pa.us>
>>To: Steve Crawford <scrawford@pinpointresearch.com>
>>Cc: pgsql-general@postgresql.org
>>Subject: Re: \dD Bug??
>>Message-ID: <5053.1039803388@sss.pgh.pa.us>
>>
>>Steve Crawford <scrawford@pinpointresearch.com> writes:
>>
>>
>>>Is this a known bug? I created a database (test) and two schemas (s1 and s2),
>>>each of which has one table (s1.test and s2.test), but when I use \dD I see
>>>no schemas listed:
>>>
>>>
>>\dD is for domains, not schemas.  There isn't a \d command for schemas
>>in 7.3.
>>
>>regards, tom lane
>>
>>------------------------------
>>
>>Date: Fri, 13 Dec 2002 13:23:50 -0500
>>From: Tom Lane <tgl@sss.pgh.pa.us>
>>To: Manfred Koizar <mkoi-pg@aon.at>
>>Cc: Alvaro Herrera <alvherre@dcc.uchile.cl>,
>>Thomas Beutin <tyrone@laokoon.IN-Berlin.DE>,
>>pgsql-general@postgresql.org
>>Subject: Re: INDEX suggestion needed
>>Message-ID: <6412.1039803830@sss.pgh.pa.us>
>>
>>Manfred Koizar <mkoi-pg@aon.at> writes:
>>
>>
>>>COUNT is cheap.  But COUNT(DISTINCT something) is not trivial, it has
>>>to keep a list of all values it has already counted.  I didn't look at
>>>the implementation.  Do we have O(n^2) cost here?
>>>
>>>
>>No, more like O(n ln n) --- it's a sort/uniq implementation.
>>
>>regards, tom lane
>>
>>------------------------------
>>
>>Date: 13 Dec 2002 13:40:19 -0500
>>From: Vivek Khera <khera@kcilink.com>
>>To: pgsql-general@postgresql.org
>>Subject: Re: Urgent need of (paid) PostgreSQL support in New
>>Message-ID: <x7lm2t69v0.fsf@onceler.kciLink.com>
>>
>>
>>
>>>>>>>"TL" == Tom Lane <tgl@sss.pgh.pa.us> writes:
>>>>>>>
>>>>>>>
>>TL> There are varying opinions about that.  Some say "push PG's
>>TL> shared-buffers setting as high as you can make it".  Some say "use a
>>TL> conservatively small shared-buffers setting and expect the kernel to use
>>TL> the rest of physical memory as kernel disk buffers".  But both camps
>>
>>I think that once your DB size gets big enough, there will be queries
>>that suck no matter what, because you have to scan through a boatload
>>of disk pages.  Basically, once your working set size is bigger than
>>the shared buffer space, you're hosed.  Making shared buffer space
>>bigger than 50% of RAM will suck, as Tom said.  I used to do that, now
>>I have about 25-30% of RAM as shared bufs.  It still sucks because the
>>data size is too big.  I've got the fastest disks I can get, and split
>>the data across multiple spindles using RAID5 (yes, I could probably
>>get faster with RAID 1+0 or something, but it is too late for that
>>now...)
>>
>>
>>--
>>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>>Vivek Khera, Ph.D.                Khera Communications, Inc.
>>Internet: khera@kciLink.com       Rockville, MD       +1-240-453-8497
>>AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/
>>
>>------------------------------
>>
>>Date: Fri, 13 Dec 2002 11:47:49 -0800
>>From: Steve Crawford <scrawford@pinpointresearch.com>
>>To: Tom Lane <tgl@sss.pgh.pa.us>
>>Cc: pgsql-general@postgresql.org
>>Subject: Re: \dD Bug??
>>Message-ID: <20021213194751.BCA27103BD@polaris.pinpointresearch.com>
>>
>>Oops, my mistake.
>>
>>-Steve
>>
>>On Friday 13 December 2002 10:16 am, Tom Lane wrote:
>>
>>
>>>Steve Crawford <scrawford@pinpointresearch.com> writes:
>>>
>>>
>>>>Is this a known bug? I created a database (test) and two schemas (s1 and
>>>>s2), each of which has one table (s1.test and s2.test), but when I use \dD
>>>>I see no schemas listed:
>>>>
>>>>
>>>\dD is for domains, not schemas.  There isn't a \d command for schemas
>>>in 7.3.
>>>
>>>regards, tom lane
>>>
>>>---------------------------(end of broadcast)---------------------------
>>>TIP 2: you can get off all lists at once with the unregister command
>>>    (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>>>
>>>
>>------------------------------
>>
>>Date: Fri, 13 Dec 2002 15:35:32 -0700
>>From: "Ken Godee" <ken@perfect-image.com>
>>To: rstpierre@syscor.com, pgsql-general@postgresql.org
>>Subject: Re: Copy/foreign key contraints
>>Message-ID: <3DF9FE43.31079.43557A@localhost>
>>
>>Ron,
>>The file I'm COPYing is straight ASCII data, just delimited,
>>no SQL statements; I'm adding data to an existing table.
>>
>>i.e.
>>------file-----------
>>25467^John Doe^480-555-1212^^12/13/2002
>>---------------------
>>
>>COPY transx FROM '/tmp/transx.dat' DELIMITERS '^' WITH NULL AS '';
>>
>>I had thought this was pretty straightforward. I don't
>>have a very complex setup and thought I was just missing
>>something (and probably am), or this was a common problem.
>>Another poster suggested forwarding a little more info and
>>the schema to the list, which I'll post a little later.
>>
>>
>>
>>
>>
>>>OOPS, correction as:
>>>Also, if names are NOT double quoted ("custID"), any reference to them gets
>>>converted to lower case, so your references must be case-correct too.
>>>eg CONSTRAINT "mycon" FOREIGN KEY ("custID") REFERENCES .....
>>>
>>>
>>>Ron St.Pierre wrote:
>>>
>>>
>>>
>>>>Check your ascii file and make sure that the column or constraint
>>>>names are not quoted. If so, check for spaces at the end of names
>>>>eg  CONSTRAINT 'mycon' FOREIGN KEY ('custid ') REFERENCES ...... <-
>>>>note the space inside quotes after custid
>>>>
>>>>Also, if names are double quoted ("custID"), any reference to them gets
>>>>converted to lower case, so your references must be case-correct too.
>>>>
>>>>eg CONSTRAINT "mycon" FOREIGN KEY ("custID") REFERENCES .....
>>>>
>>>>
>>>>
>>>>Ken Godee wrote:
>>>>
>>>>
>>>>
>>>>>The data's not originally coming from a postgres database,
>>>>>it's in a ascii delimited format, sorry for not mentioning that.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>Ken Godee wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>table 1: customer.custid primary key
>>>>>>>table 2: transx.custid foreign key
>>>>>>>
>>>>>>>When using COPY to import data I received;
>>>>>>>
>>>>>>>"<unnamed> referential integrity violation - key referenced from
>>>>>>>table transx not found in customer" (fairly vague statement)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>---------------------------(end of broadcast)---------------------------
>>>>>TIP 3: if posting/reading through Usenet, please send an appropriate
>>>>>subscribe-nomail command to majordomo@postgresql.org so that your
>>>>>message can get through to the mailing list cleanly
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>--
>>>Ron St.Pierre
>>>Syscor R&D
>>>tel: 250-361-1681
>>>email: rstpierre@syscor.com
>>>
>>>
>>>
>>>
>>>
>>
>>------------------------------
>>
>>Date: 13 Dec 2002 23:53:21 +0100
>>From: Moritz Sinn <moritz@freesources.org>
>>To: pgsql-general@postgresql.org
>>Subject: getting datatype of array elements
>>Message-ID: <87y96tmsym.fsf@appserv.sinnhq.de>
>>
>>
>>hi,
>>
>>With the following statement, I get the datatype of a column with the help of
>>its name and the tableoid:
>>
>>SELECT typname FROM pg_attribute LEFT JOIN pg_type ON atttypid=typelem
>>WHERE attrelid=18210 AND attname='email';
>>
>>but if the column's datatype is an array, that doesn't work.
>>I'd like to get the basic datatype (the datatype of the array elements) then.
>>
>>any ideas?
>>
>>thanks,
>> moritz
>>
>>--
>>
>>------------------------------
>>
>>Date: Fri, 13 Dec 2002 18:49:59 -0800
>>From: Joe Conway <mail@joeconway.com>
>>To: Moritz Sinn <moritz@freesources.org>
>>Cc: pgsql-general@postgresql.org
>>Subject: Re: getting datatype of array elements
>>Message-ID: <3DFA9C57.3060904@joeconway.com>
>>
>>Moritz Sinn wrote:
>>
>>
>>>hi,
>>>
>>>With the following statement, I get the datatype of a column with the help of
>>>its name and the tableoid:
>>>
>>>SELECT typname FROM pg_attribute LEFT JOIN pg_type ON atttypid=typelem
>>>WHERE attrelid=18210 AND attname='email';
>>>
>>>but if the column's datatype is an array, that doesn't work.
>>>I'd like to get the basic datatype (the datatype of the array elements) then.
>
>
>>See typelem:
>>
>>
>>
>>http://www.us.postgresql.org/users-lounge/docs/7.3/postgres/catalog-pg-type.html
>
>
>>HTH,
>>
>>Joe
>>
>>
>>
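Joe's pointer can be turned into a concrete query; a sketch, reusing the attrelid/attname from Moritz's example. In pg_type, typelem is nonzero only for array types and points at the element type's OID, so joining through it yields the base datatype:

```sql
-- Find the element type of an array column (typelem = 0 for non-arrays).
SELECT et.typname AS element_type
  FROM pg_attribute a
  JOIN pg_type t  ON t.oid  = a.atttypid   -- the column's own type
  JOIN pg_type et ON et.oid = t.typelem    -- its element type (arrays only)
 WHERE a.attrelid = 18210 AND a.attname = 'email';
```

For a non-array column the inner join against typelem returns no row, which is one way to tell the two cases apart.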
>>------------------------------
>>
>>End of [pgsql-general] Daily Digest V1 #2863
>>**********
>>
>>
>>
>>
>>
>
>
>
>
>---------------------------(end of broadcast)---------------------------
>TIP 6: Have you searched our list archives?
>
>http://archives.postgresql.org
>
>




In the pgsql-general list, by date sent:

Previous
From: Medi Montaseri
Date:
Message: Re: Where are my tables physically in the fs?
Next
From: Medi Montaseri
Date:
Message: Re: [HACKERS] PostgreSQL Global Development Group