Re: Data grid: fetching/scrolling data on user demand

From: Dave Page
Subject: Re: Data grid: fetching/scrolling data on user demand
Date:
Msg-id: CA+OCxowg8snDKB2sCKkXk3K=-NrgJcyUu2ctNDYV9TfDpenmtw@mail.gmail.com
In reply to: Re: Data grid: fetching/scrolling data on user demand  ("Tomek" <tomek@apostata.org>)
Responses: Re: Data grid: fetching/scrolling data on user demand  ("Tomek" <tomek@apostata.org>)
List: pgadmin-support


On Tue, Oct 17, 2017 at 1:44 PM, Tomek <tomek@apostata.org> wrote:
Hi,

>>>> It is not exactly true... In v3 the query is executed, fetched and all rows are displayed,
>>>
>>> No they're not, though they are all transferred to the client which is why it's slower.
>>
>> They are not what?
>
> The handling of rows in pgAdmin 3 is not as you described.

So please tell me how it is done.


I don't have the spare cycles to start explaining how pgAdmin 3 worked internally.
 

>> What is slower is the "display" part in both versions. You have the data from the server and then You
>> push it to the display.
>> I've done a quick test - table 650000 rows / 45 columns, query SELECT * from table limit 100000.
>> With the default ON_DEMAND_RECORD_COUNT it takes around 5 seconds; with ON_DEMAND_RECORD_COUNT = 100000 it takes 25
>> seconds...
>> That is 20 seconds spent only on displaying...
>
> So? No human can read that quickly.

I get the joke but it doesn't add anything to this discussion...

It wasn't a joke.
 

>>>> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It
>>>> is done only because the main lag of v4 comes from the interface. I don't see any other purpose for
>>>> it... If You know (and You do) that v4 can't handle big results, add pagination like every other
>>>> webapp...
>>>
>>> We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
>>>
>>> What we have now gives users the interface they want, and presents the data to them quickly - far
>>> more quickly than pgAdmin 3 ever did when working with larger resultsets.
>>>
>>> If that's pointless for you, then that's fine, but other users appreciate the speed and
>>> responsiveness.

Part of the data...
Please explain to me what is the point of requesting 100000 and getting 1000 without possibility to access the rest?

Of course you can get all 100000, you just scroll down. Saying that you can only get 1000 "without possibility to access the rest" is 100% factually incorrect. 
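
For context, the "load on demand" pattern being argued about generally works like the sketch below. This is a generic illustration using psycopg2 and a server-side cursor, not pgAdmin's actual implementation; the connection string, table name and batch size are placeholders.

    # Generic "fetch on demand" sketch -- NOT pgAdmin's internal code.
    # Rows stay on the server until the client asks for the next batch.
    import psycopg2

    BATCH_SIZE = 1000  # plays the role of ON_DEMAND_RECORD_COUNT here

    conn = psycopg2.connect("dbname=test")   # placeholder connection string
    cur = conn.cursor(name="grid_cursor")    # named cursor => server-side
    cur.execute("SELECT * FROM big_table")   # placeholder query

    def next_batch():
        # Called each time the grid scrolls past the rows it already holds.
        return cur.fetchmany(BATCH_SIZE)

    rows = next_batch()    # the first "page" arrives quickly
    # ...as the user keeps scrolling, further batches are appended:
    rows += next_batch()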


>> I don't know of any users (we are the users) who are happy that selecting 10000 rows requires
>> dragging the scrollbar five times to see the 5001st record...
>
>> Saying "pointless" I meant that if I want 10000 rows I should get 10000 rows; if I want to limit my
>> data I'll use LIMIT. But if the UI can't handle big results, just give me the easiest/fastest way to get
>> to my data.
>
> Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the way you work. Very few
> people scroll as you suggest - if you know you want to see the 5001st record, it's common to use
> limit/offset. If you don't know, then you're almost certainly going to scroll page by page or
> similar, reading the results as you go - in which case, the batch loading will speed things up for
> you as you'll have a much quicker "time to view first page".
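
For reference, ON_DEMAND_RECORD_COUNT can be raised through a local configuration override; the sketch below assumes the usual config_local.py mechanism, and the value shown is purely illustrative, not a recommendation.

    # config_local.py -- local overrides for pgAdmin 4's config.py.
    # Restart pgAdmin after editing; the value below is only an example.

    # Number of rows fetched per batch as the query tool grid is scrolled.
    ON_DEMAND_RECORD_COUNT = 10000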

You can read, yes? I wrote about LIMIT earlier... I'll put it in simpler terms, in case "you don't know" what I mean.

You understand that if a user selects data, the user wants to get the data? That is why he is doing the select.
You understand that to get the data is to see the data? That is why he is doing the select.
You understand that the data can be more than 1000 records?
You understand that You hide the data without the possibility to access it (at least quickly)?
You understand that in this example the user didn't get the data?

I hope that the above clarifies the pointlessness of "display on demand" without some other option to speed up the browsing process.

As for increasing ON_DEMAND_RECORD_COUNT - I would gladly do it, but then displaying 100000 records takes more time and about 500 MB of memory (v3 used around 80 MB)... What is more, in v4 I can't get 300000 records even with the default ON_DEMAND_RECORD_COUNT (I canceled the query after 5 minutes), while v3 managed to do it in 80 seconds.

I understand that large results are rare, but this is a management tool, not some office app... It should do what I want...

And again You are telling me what I want and what I need, and how I should do it... You decided to make this tool in a way that no other db management tool works,

No - I'm telling you what nearly 20 years of interacting with pgAdmin users has indicated they likely want. If it's not what you want, then you are free to look at other tools. We do our best to meet the requirements of the majority of our users, but it's obviously impossible to meet everyone's requirements.
 
You cut out a lot of v3 features,

Yes. Did you ever create an operator family? Or use the graphical query designer that crashed all the time? Those and many other features were intentionally removed. Some have been put back in cases where it became clear they were needed and others will still be added in the future - but the vast majority of things we left out will likely remain gone because they add complexity and maintenance costs for zero or near zero value.
 
You purposely limit usability to account for poor performance and call it a feature...

You haven't (as far as I can see) described how we limited usability, except by making claims that are provably wrong. 

I'm not going to spend any more of my free time on this thread. I have little enough as it is.
