Discussion: Re: 8K Limit, what's the best strategy?
Jan Wieck wrote:
> Poul L. Christiansen wrote:
> > I've just split the text up (in 8 pieces), so it fits into 8K rows. But
> > that's only a viable solution if your text is less than a couple of 100K.
> >
> > You could try to be a daredevil and use the TOAST code, even if it's beta.
> > But I don't know how far the TOAST project has come.
>
> TOAST is finished and will be shipped with 7.1. It's not a
> solution for huge items, but medium-sized text up to some
> hundred K works fine.

What do you mean by "not a solution for huge items"? Does TOAST have a size
limit?

> Jan
>
> > Keith Wong wrote:
> > >
> > > Hi everyone,
> > >
> > > Just wondering what strategies people have used to get around the 8K row
> > > limit in Postgres.
> > > If anyone has been troubled by this limitation before and has a nice
> > > solution around it, I would love to hear it.
> > >
> > > My application is a web-based system which needs to store large amounts
> > > of text.
> > > The text stored needs to be searchable as well.
> > >
> > > Cheers,
> > > Keith.
>
> --
> #======================================================================#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me.                                  #
> #================================================== JanWieck@Yahoo.com #
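The "split the text up in 8K pieces" workaround Poul describes can be sketched in a few lines of Python. This is only an illustration, not code from the thread: the 8000-byte chunk size and the implied `text_chunks (doc_id, seq, chunk)` table layout are assumptions.

```python
# Sketch of the "split into 8K rows" workaround, assuming a hypothetical
# table like: CREATE TABLE text_chunks (doc_id int, seq int, chunk text).
CHUNK_SIZE = 8000  # stay safely under the 8K row limit

def split_text(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Cut a large string into pieces small enough for one row each."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def reassemble(chunks: list[str]) -> str:
    """Concatenate the pieces back (SELECT ... ORDER BY seq on the DB side)."""
    return "".join(chunks)

doc = "x" * 25000              # a 25KB document
pieces = split_text(doc)
print(len(pieces))             # → 4 (three full 8000-byte chunks + remainder)
assert reassemble(pieces) == doc
```

Each piece would be INSERTed as its own row with an increasing `seq`, and searching means matching against every chunk of a document, which is exactly why this only stays practical for texts up to a few hundred K.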
Poul L. Christiansen wrote:
> Jan Wieck wrote:
> > TOAST is finished and will be shipped with 7.1. It's not a
> > solution for huge items, but medium-sized text up to some
> > hundred K works fine.
>
> What do you mean by "not a solution for huge items"? Does TOAST have a size
> limit?

Not an explicit one. But imagine you really want to store an MP3 of, let's
say, 9MB in the database:

1. Your client application must quote it somehow to put it into an INSERT
   query string. The quoting makes it a 10MB thing (think positive).

2. The query is sent to the backend. Now you have this string a second time
   in memory.

3. The query is parsed and a 10MB text datum is built for the query tree's
   Const node.

4. The query is executed and builds a 10MB tuple to store.

5. TOAST jumps in and moves it out of the tuple again.

So if the client and DB are running on the same system, you'll have the MP3
four times in memory. Now do it concurrently with 20 backends and you'll
need 800+ MB... you see why I said it's "not a solution for huge items"?

Jan