Re: How to best use 32 15k.7 300GB drives?

From: Robert Schnabel
Subject: Re: How to best use 32 15k.7 300GB drives?
Date:
Msg-id: 4D42F127.9050307@missouri.edu
Whole thread  Raw message
In response to: Re: How to best use 32 15k.7 300GB drives?  (Stephen Frost <sfrost@snowman.net>)
Responses: Re: How to best use 32 15k.7 300GB drives?
           Re: How to best use 32 15k.7 300GB drives?
List: pgsql-performance
On 1/28/2011 7:14 AM, Stephen Frost wrote:
> Robert,
>
> * Robert Schnabel (schnabelr@missouri.edu) wrote:
>> Once the bulk data is inserted into the tables I generally
>> do some updates on columns to set values which characterize the
>> data.
>
> Please tell me you're not running actual full-table UPDATE statements...
> You would be *much* better off either:
> a) munging the data on the way in (if possible/reasonable)
> b) loading the data into temp tables first, and then using INSERT
>    statements to move the data into the 'final' tables WITH the new
>    columns/info you want
> c) considering if you can normalize the data into multiple tables and/or
>    to cut down the columns to only what you need as you go through the
>    above, too
>
> A full-table UPDATE means you're basically making the table twice as big
> as it needs to be, and it'll never get smaller..
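(If I follow, option (b) amounts to something along these lines; the table and column names here are invented just to illustrate, they're not my actual schema:)

  -- stage the raw csv in a temp table that goes away on its own
  CREATE TEMPORARY TABLE raw_load (sample_id bigint, snp_number int, allele1 smallint, allele2 smallint);
  COPY raw_load FROM 'C:/loads/chip42.csv' WITH CSV;

  -- build the final table with the derived columns filled in on the way
  -- through, instead of UPDATE-ing them afterwards
  INSERT INTO genotypes_chip42 (sample_id, snp_number, allele1, allele2, maf)
  SELECT r.sample_id, r.snp_number, r.allele1, r.allele2, f.maf
  FROM raw_load r
  JOIN snp_freq f USING (snp_number);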
Depends on what you mean by that.  The tables that I'm concerned with look something like bigint x2, varchar x13, int x24, real x8, smallint x4, by about 65M rows each.  I only do the updates on one table at a time.  The real columns are actually null in the input csv file.  I run an update which basically uses some of the integer columns and calculates frequencies which go into the real columns.  Ditto with some of the other columns.  I don't do this before I upload the data because 1) it's easier this way and 2) I can't, because some of the updates involve joins to other tables to grab info, which I can't do outside the database.  So yes, once the upload is done I run queries that update every row for certain columns, not every column.  After I'm done with a table I run a VACUUM ANALYZE.  I'm really not worried about what my table looks like on disk.  I actually take other steps as well to avoid what you're talking about.
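To give a concrete flavor, one of those passes is essentially the following (column and table names simplified/made up for the example):

  -- one pass over one ~65M-row partition: fill a real column from the int
  -- columns plus a join to another table, then clean up afterwards
  UPDATE genotypes_chip42 g
  SET    allele_freq = s.freq
  FROM   snp_summary s
  WHERE  s.snp_number = g.snp_number;

  VACUUM ANALYZE genotypes_chip42;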
wrap=""></pre><blockquotetype="cite"><pre wrap="">These columns then get indexed.  Basically once the initial
 
manipulation is done the table is then static and what I'm looking
for is query speed.
</pre></blockquote><pre wrap="">
Sadly, this is the same type of DW needs that I've got (though with
telecomm data and phone calls, not genetic stuffs ;), and PG ends up
being limited by the fact that it can only use one core/thread to go
through the data with.

You might consider investing some time trying to figure out how to
parallelize your queries.  My approach to this has been to partition the
data (probably something you're doing already) into multiple tables and
then have shell/perl scripts which will run a given query against all of
the tables, dumping the results of that aggregation/analysis into other
tables, and then having a final 'merge' query.
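(In SQL terms I take it the per-partition pass and the merge look roughly like the following, with the shell/perl part just firing one psql per partition at the same time; names invented for the example:)

  -- run once per partition, from several connections in parallel:
  INSERT INTO results_partial (sample_id, n_calls)
  SELECT sample_id, count(*)
  FROM genotypes_chip42        -- substitute each partition name per job
  WHERE genotype IS NOT NULL
  GROUP BY sample_id;

  -- single 'merge' query once all the partial jobs have finished:
  SELECT sample_id, sum(n_calls) AS n_calls
  FROM results_partial
  GROUP BY sample_id;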
Thanks for the advice but parallelizing/automating doesn't really do anything for me.  The data is already partitioned.  Think of it this way: you just got 65M new records with about 30 data points per record on an individual sample.  You put it in a new table of its own and now you want to characterize those 65M data points.  The first update flags about 60M of the rows as uninteresting, so you move them to their own *uninteresting* table and basically never really touch them again (but you can't get rid of them).  Now you're working with the 5M that you're going to characterize into about 20 categories based on what is in those 30 columns of data.  Do all the querying/updating, then index, and you're done.  Too long to describe, but I cannot automate this.  I only update one partition at a time and only about every couple of weeks or so.
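(The "move them to their own table" step is essentially this, again with invented names:)

  -- park the ~60M uninteresting rows in their own table, then drop them
  -- from the working partition and clean up
  INSERT INTO chip42_uninteresting
  SELECT * FROM genotypes_chip42 WHERE maf = 0;

  DELETE FROM genotypes_chip42 WHERE maf = 0;
  VACUUM ANALYZE genotypes_chip42;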
type="cite"><prewrap=""></pre><blockquote type="cite"><pre wrap="">The data is sorted by snp_number, sample_id.  So if
Iwant the data
 
for a given sample_id it would be a block of ~58k rows.  The size of
the table depends on how many sample_id's there are.  My largest has
~30k sample_id by 58k snp_number per sample.  The other big table
(with children) is "mutations" and is set up similarly so that I can
access individual tables (samples) based on constraints.  Each of
these children have between 5-60M records.
</pre></blockquote><pre wrap="">
Understand that indexes are only going to be used/useful, typically, if
the amount of records being returned is small relative to the size of
the table (eg: 5%).
Yep, I understand that.  Even though they occupy a lot of space, I keep them around because there are times when I need them.
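The typical index is just on the lookup columns, something like this (name approximate):

  CREATE INDEX genotypes_chip42_sample_idx ON genotypes_chip42 (sample_id, snp_number);

Pulling one sample is ~58k rows out of tables with millions to billions of rows, so it stays well under that few-percent threshold and the index does get used for those queries.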
type="cite"><prewrap="">
 
</pre><blockquote type="cite"><pre wrap="">This is all direct attach storage via SAS2 so I'm guessing it's
probably limited to the single port link between the controller and
the expander.  Again, geneticist here not computer scientist. ;-)
</pre></blockquote><pre wrap="">
That link certainly isn't going to help things..  You might consider how
or if you can improve that.
Suggestions???  It was previously suggested to split the drives on each array across the two controller ports rather than have all the data drives on one port, which makes sense.  Maybe I'm getting my terminology wrong here, but I'm talking about a single SFF-8088 link to each 16-drive enclosure.  What about two controllers, one for each enclosure?  I don't know if I have enough empty slots, though.
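Back of the envelope, if I have the numbers right: a single SFF-8088 cable carries 4 SAS2 lanes at 6 Gbit/s each, so very roughly 2.4 GB/s, while 16 of these 15k.7 drives can stream somewhere in the neighborhood of 2-3 GB/s combined.  So one x4 link per 16-drive enclosure really could be the choke point for big sequential scans.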
cite="mid:20110128131410.GC30352@tamriel.snowman.net"type="cite"><pre wrap=""></pre><blockquote type="cite"><pre
wrap="">Allof the data could be reloaded.  Basically, once I get the data
 
into the database and I'm done manipulating it I create a backup
copy/dump which then gets stored at a couple different locations.
</pre></blockquote><pre wrap="">
You might consider turning fsync off while you're doing these massive
data loads..  and make sure that you issue your 'CREATE TABLE' and your
'COPY' statements in the same transaction, and again, I suggest loading
into temporary (CREATE TEMPORARY TABLE) tables first, then doing the
CREATE TABLE/INSERT statement for the 'real' table.  Make sure that you
create *both* your constraints *and* your indexes *after* the table is
populated.

If you turn fsync off, make sure you turn it back on. :)

I haven't messed with fsync but maybe I'll try it.  In general, I create my indexes and constraints after I'm done doing all the updating I need to do.  I made the mistake *once* of copying millions of rows into a table that already had indexes.
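If I do try that pattern on the next load, I take it it looks roughly like this (paths and table names made up); the point, as I understand it, being that a COPY into a table created in the same transaction can skip WAL when archiving is off, and the indexes/constraints only come afterwards:

  BEGIN;
  CREATE TABLE chip43_raw (sample_id bigint, snp_number int, allele1 smallint, allele2 smallint);
  COPY chip43_raw FROM 'C:/loads/chip43.csv' WITH CSV;
  COMMIT;

  -- ... do the updates/characterization here ...

  CREATE INDEX chip43_sample_idx ON chip43_raw (sample_id);
  ALTER TABLE chip43_raw ADD PRIMARY KEY (sample_id, snp_number);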
type="cite"><prewrap=""></pre><blockquote type="cite"><pre wrap="">My goal is to 1) have a fairly robust system so that
Idon't have to
 
spend my time rebuilding things and 2) be able to query the data
quickly.  Most of what I do are ad hoc queries.  I have an idea...
"how many X have Y in this set of Z samples" and write the query to
get the answer.  I can wait a couple minutes to get an answer but
waiting an hour is becoming tiresome.
</pre></blockquote><pre wrap="">
Have you done any analysis to see what the bottleneck actually is?  When
you run top, is your PG process constantly in 'D' state, or is it in 'R'
state, or what?  Might help figure some of that out.  Note that
parallelizing the query will help regardless of if it's disk bound or
CPU bound, when you're running on the kind of hardware you're talking
about (lots of spindles, multiple CPUs, etc).
Thanks,
    Stephen
It got lost from the original post, but my database (9.0.0) is currently on my Windows XP 64-bit workstation in my office on a 16-drive Seagate 15k.5 RAID5 (no comments needed, I know, I'm moving it :-).  I'm moving it to my server, which is Windows Enterprise Server 2008 R2 64-bit with 8 AMD cores & 32G RAM, and these new drives/controller.  So no top or lvm, although I do keep an eye on things with Process Explorer.  Also, I don't have any single query that is a problem.  I have my canned queries which I run manually to update/manipulate/move data around every couple weeks when I get a new chunk of data.  Other than that my queries are all ad hoc.  I'm just trying to get opinions on the best way to set up these drives/controllers/enclosures for basically large sequential reads that quite often use indexes.
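About the closest I can get to the top D/R check from the database side is polling pg_stat_activity while one of the long queries runs (this is the 9.0 column naming, if I have it right):

  -- waiting = true means blocked on a lock rather than grinding on I/O or
  -- CPU; for I/O vs CPU I still have to watch Process Explorer
  SELECT procpid, waiting, now() - query_start AS runtime, current_query
  FROM pg_stat_activity
  WHERE current_query <> '<IDLE>'
  ORDER BY runtime DESC;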
So far I'd summarize the consensus as:

1) Putting WAL on a separate array is worthless since I do very few writes.  What about putting my temp tablespace on the same array with WAL & xlog (see the P.S. below)?  I've noticed a lot of the ad hoc queries I run create temp files, sometimes tens of GB.  I appreciate the fact that managing multiple tablespaces is not as easy as managing one, but if it helps...

2) Indexes on a separate array may not be all that useful since I'm not doing simultaneous reads/writes.

3) Since I can very easily recreate the database in case of crash/corruption, RAID10 may not be the best option.  However, if I do go with RAID10, split the drives between the two enclosures (this assumes data & index arrays).  I've thought about RAID0, but quite frankly I really don't like having to rebuild things.  At some point my time becomes valuable.  RAID6 was suggested, but rebuilding a 9TB RAID6 seems scary slow to me.

I appreciate the comments thus far.

Bob
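P.S. For point 1, the temp tablespace move I had in mind would be something like this (path made up), with temp_tablespaces set either per session or in postgresql.conf:

  CREATE TABLESPACE temp_space LOCATION 'E:/pg_temp';  -- directory on the WAL/xlog array
  SET temp_tablespaces = 'temp_space';                 -- sort/temp files then land there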
