Discussion: Data grid: fetching/scrolling data on user demand


Data grid: fetching/scrolling data on user demand

From:
legrand legrand
Date:
Hello,

The data grid is populated without any limit by default,
which could be a problem with very big datasets ...

To avoid this, it is possible to limit the number of rows retrieved,
but that limit is fixed, even if the user tries to scroll for more data ...

Is scrolling data on demand supported?

Thanks in advance for your answer.
Regards
PAscal



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-pgadmin-support-f2191615.html


Re: Data grid: fetching/scrolling data on user demand

From:
Dave Page
Date:


On Sat, Oct 14, 2017 at 8:24 PM, legrand legrand <legrand_legrand@hotmail.com> wrote:
Hello,

The data grid is populated without any limit by default,
which could be a problem with very big datasets ...

To avoid this, it is possible to limit the number of rows retrieved,
but that limit is fixed, even if the user tries to scroll for more data ...

Is scrolling data on demand supported?

pgAdmin 4 loads data on demand (i.e. as the user scrolls) already. That's why it's significantly faster than pgAdmin 3. The batch size can be changed (from the default 1000) in the application config file by overriding the ON_DEMAND_RECORD_COUNT option in web/config_local.py in your installation.
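A minimal `web/config_local.py` override, as described above, might look like this (the file path and option name are from Dave's reply; the value 5000 is only an illustrative choice):

```python
# web/config_local.py -- local overrides picked up by pgAdmin 4's config system.
# Increase the on-demand scroll batch size from the default of 1000 rows.
# 5000 is an illustrative value, not a recommendation.
ON_DEMAND_RECORD_COUNT = 5000
```

Restarting pgAdmin 4 after editing the file would be needed for the new value to take effect.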
 
--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Data grid: fetching/scrolling data on user demand

From:
legrand legrand
Date:
Sorry, why does

select * from information_schema.columns a, information_schema.columns b

on a newly created db never finish, when

select * from information_schema.columns a, information_schema.columns b limit 1000

takes less than one second?

Is pgAdmin 4 really fetching only the first 1000 records?

PAscal
SQLeo project manager





Re: Data grid: fetching/scrolling data on user demand

From:
Murtuza Zabuawala
Date:

On Tue, Oct 17, 2017 at 12:40 AM, legrand legrand <legrand_legrand@hotmail.com> wrote:
maybe this behavior is related to fetching records using a server-side cursor?

https://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL#Fetch_Records_using_a_Server-Side_Cursor
No, we are not using a server-side cursor.
pgAdmin 4 executes queries in async mode and polls for the result as it becomes available on the connection's file descriptor, so the delay you are seeing on the cross join comes from the psycopg2 driver itself; pgAdmin 4 will start polling result data incrementally, in batches (1000 records by default), as soon as rows become available on the connection's file descriptor.
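The batching behaviour described above can be sketched as a toy model (this is not pgAdmin's actual code, just a simplified illustration of handing an already-fetched result set to the grid one batch at a time):

```python
# Toy model of pgAdmin 4's on-demand batching: once the driver has made
# the result set available, the UI pulls it into view in fixed-size
# batches as the user scrolls, instead of rendering everything at once.

ON_DEMAND_RECORD_COUNT = 1000  # pgAdmin 4's default batch size

def fetch_in_batches(rows, batch_size=ON_DEMAND_RECORD_COUNT):
    """Yield successive batches of an already-available result set."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

# Simulate a 2500-row result: the grid would show three batches of
# 1000, 1000 and 500 rows as the user scrolls.
result = list(range(2500))
print([len(batch) for batch in fetch_in_batches(result)])  # -> [1000, 1000, 500]
```

Note that the expensive step in the thread's cross-join example happens before this point: the driver only hands over rows once the server has produced them.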


I met the same problem using pgjdbc with Oracle SQL Developer, as described here:
https://stackoverflow.com/questions/46066882/autocommit-always-turns-on-in-oracle-sqldeveloper-when-connecting-to-postgresql
or
https://github.com/pgjdbc/pgjdbc/issues/976





Re: Data grid: fetching/scrolling data on user demand

From:
legrand legrand
Date:
How long does it take in your environment to fetch the first 1000 records from

select * from information_schema.columns a,information_schema.columns b 







Re: Data grid: fetching/scrolling data on user demand

From:
Murtuza Zabuawala
Date:

On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <legrand_legrand@hotmail.com> wrote:
How long does it take in your environment to fetch the first 1000 records from

select * from information_schema.columns a,information_schema.columns b
I didn't run it, because in my environment just a count from information_schema.columns gave me 7,325 records (the cross join will be 7,325 * 7,325 records, and obviously that will take a huge amount of time).
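As a quick sanity check on the figures above, the unconstrained self cross join would materialise over 53 million rows:

```python
# 7,325 rows on each side of the cross join (the count quoted above),
# so the full result set is 7,325 squared.
rows_per_side = 7325
print(rows_per_side * rows_per_side)  # -> 53655625, about 53.7 million rows
```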






Re: Data grid: fetching/scrolling data on user demand

From:
Dave Page
Date:


On Tue, Oct 17, 2017 at 6:36 AM, Murtuza Zabuawala <murtuza.zabuawala@enterprisedb.com> wrote:

On Tue, Oct 17, 2017 at 2:22 AM, legrand legrand <legrand_legrand@hotmail.com> wrote:
How long does it take in your environment to fetch the first 1000 records from

select * from information_schema.columns a,information_schema.columns b
I didn't run it, because in my environment just a count from information_schema.columns gave me 7,325 records (the cross join will be 7,325 * 7,325 records, and obviously that will take a huge amount of time).

Right - exactly as it does in psql (the command line interface). pgAdmin can't make the database engine faster - all it can do is retrieve and display the results as efficiently as possible, once the server makes them available.


--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Data grid: fetching/scrolling data on user demand

From:
legrand legrand
Date:
pgAdmin doesn't have to wait for all the data,
as it should only load/fetch the first 1000 rows.

Loading all the data in memory will not be possible for big datasets.

This is a design error, from my point of view.

PAscal
SQLeo project manager







Re: Data grid: fetching/scrolling data on user demand

From:
Dave Page
Date:


On Tue, Oct 17, 2017 at 9:36 AM, legrand legrand <legrand_legrand@hotmail.com> wrote:
pgAdmin doesn't have to wait for all the data,
as it should only load/fetch the first 1000 rows.

Loading all the data in memory will not be possible for big datasets.

This is a design error, from my point of view.

pgAdmin doesn't wait for all the data. It does load/fetch only the first 1000 rows.

However, it cannot do anything until Postgres makes data available.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Data grid: fetching/scrolling data on user demand

From:
legrand legrand
Date:
The first 1000 rows are available in less than one second.
See the query with limit 1000.

Monitoring memory usage or PG_stat_activity shows that all the data is
fetched.






Re: Data grid: fetching/scrolling data on user demand

From:
Dave Page
Date:


On Tue, Oct 17, 2017 at 9:51 AM, legrand legrand <legrand_legrand@hotmail.com> wrote:
The first 1000 rows are available in less than one second.
See the query with limit 1000.

We cannot add arbitrary limits/offsets to users' queries. They would affect timing for those who are trying to tune queries, and may well affect other behaviours. Not to mention that the user might be running a complex script that we would have to parse client-side to try to figure out where to add the limit/offset (and we would likely get it wrong).
 
Monitoring memory usage or PG_stat_activity shows that all the data is
fetched.

Sure - pg_stat_activity (it's all lower case) will show the connection state as idle. But the point is - until it does, the query is still being processed and we don't get the data.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Data grid: fetching/scrolling data on user demand

From:
Murtuza Zabuawala
Date:
As I mentioned in my previous email, we do not use a server-side cursor, so no limit is added to the query.
The delay is from the database driver itself; it has nothing to do with pgAdmin 4.

Try executing the same query in psql, pgAdmin 3, or any third-party tool which uses the libpq library as a backend; you will observe the same behaviour.

On Tue, Oct 17, 2017 at 2:21 PM, legrand legrand <legrand_legrand@hotmail.com> wrote:
The first 1000 rows are available in less than one second.
See the query with limit 1000.

Monitoring memory usage or PG_stat_activity shows that all the data is
fetched.






Re: Data grid: fetching/scrolling data on user demand

From:
"Tomek"
Date:
Hi,

> As I mentioned in my previous email, we do not use a server-side cursor, so no limit is added
> to the query.
>
> The delay is from the database driver itself; it has nothing to do with pgAdmin 4.
> Try executing the same query in psql, pgAdmin 3, or any third-party tool which uses the libpq
> library as a backend; you will observe the same behaviour.

It is not exactly true... In v3 the query is executed, fetched, and all rows are displayed; in v4 the query is executed
and fetched, but only 1000 records are displayed.

For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because
the main lag of v4 comes from the interface. I don't see any other purpose for it... If You know (and You do) that v4 can't
handle big results, add pagination like every other webapp...

And by the way, You have a big leak in the query tool - execute a query with several thousand rows, scroll above the 1000 mark
a few times, execute the same query, scroll above the 1000 mark a few times - repeat until You run out of memory or v4
crashes...

--
Tomek


Re: Data grid: fetching/scrolling data on user demand

From:
Dave Page
Date:


On Tue, Oct 17, 2017 at 10:51 AM, Tomek <tomek@apostata.org> wrote:
Hi,

> As I mentioned in my previous email that we do not use server side cursor, so it won't add any
> limit on query.
>
> The delay is from database driver itself, it has nothing to do with pgAdmin4.
> Try executing the same query in 'psql', 'pgAdmin3' and third party tool which use libpq library as
> backend, you will observe the same behaviour.

It is not exactly true... In v3 the query is executed, fetched, and all rows are displayed,

No they're not, though they are all transferred to the client which is why it's slower.
 
> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from the interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results, add pagination like every other webapp...

We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.

What we have now gives users the interface they want, and presents the data to them quickly - far more quickly than pgAdmin 3 ever did when working with larger resultsets.

If that's pointless for you, then that's fine, but other users appreciate the speed and responsiveness.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Data grid: fetching/scrolling data on user demand

From:
Murtuza Zabuawala
Date:
Hi Tomek,

On Tue, Oct 17, 2017 at 3:21 PM, Tomek <tomek@apostata.org> wrote:
Hi,

> As I mentioned in my previous email that we do not use server side cursor, so it won't add any
> limit on query.
>
> The delay is from database driver itself, it has nothing to do with pgAdmin4.
> Try executing the same query in 'psql', 'pgAdmin3' and third party tool which use libpq library as
> backend, you will observe the same behaviour.

It is not exactly true... In v3 the query is executed, fetched, and all rows are displayed; in v4 the query is executed and fetched, but only 1000 records are displayed.
My answer was not in regard to fetching & displaying the data in the UI; it was about the execution time of the query.
My point was that unless we get the result from the database driver itself, we cannot fetch it.


For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It is done only because the main lag of v4 comes from interface. I don't see any other purpose for it... If You know (and You do) that v4 can't handle big results add pagination like every other webapp...

And by the way You have a big leak in query tool - execute query with several thousands rows, scroll above 1000 mark few times, execute the same query, scroll above 1000 mark few times - repeat until You run out of memory or v4 crashes...
Thanks for reporting, I'll look into this.
 

--
Tomek


Re: Data grid: fetching/scrolling data on user demand

From:
"Tomek"
Date:
Hi,

>> It is not exactly truth... In v3 the query is executed, fetched and all rows are displayed,
>
> No they're not, though they are all transferred to the client which is why it's slower.

They are not what? What is slower is the "display" part in both versions. You have data from the server and then You push it to display.
I've done a quick test - a table with 650000 rows / 45 columns, query SELECT * from table limit 100000.
With the default ON_DEMAND_RECORD_COUNT it takes around 5 seconds; with ON_DEMAND_RECORD_COUNT = 100000, 25 seconds...
That is 20 seconds spent only on displaying...

>> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It
> is done only because the main lag of v4 comes from interface. I don't see any other purpose for
> it... If You know (and You do) that v4 can't handle > big results add pagination like every other
> webapp...
>
> We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
>
> What we have now gives users the interface they want, and presents the data to them quickly - far
> more quickly than pgAdmin 3 ever did when working with larger resultsets.
>
> If that's pointless for you, then that's fine, but other users appreciate the speed and
> responsiveness.

I don't know of any users (we are the users) who are happy that selecting 10000 rows requires dragging the scrollbar five
times to see the 5001st record...

Saying pointless, I meant that if I want 10000 rows I should get 10000 rows; if I want to limit my data I'll use LIMIT.
But if the UI can't handle big results, just give me the easiest/fastest way to get to my data.

--
Tomek


Re: Data grid: fetching/scrolling data on user demand

From:
Dave Page
Date:


On Tue, Oct 17, 2017 at 11:35 AM, Tomek <tomek@apostata.org> wrote:
Hi,

>> It is not exactly true... In v3 the query is executed, fetched, and all rows are displayed,
>
> No they're not, though they are all transferred to the client which is why it's slower.

They are not what?

The handling of rows in pgAdmin 3 is not as you described.
 
What is slower is the "display" part in both versions. You have data from the server and then You push it to display.
I've done a quick test - a table with 650000 rows / 45 columns, query SELECT * from table limit 100000.
With the default ON_DEMAND_RECORD_COUNT it takes around 5 seconds; with ON_DEMAND_RECORD_COUNT = 100000, 25 seconds...
That is 20 seconds spent only on displaying...

So? No human can read that quickly.
 

>> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It
> is done only because the main lag of v4 comes from interface. I don't see any other purpose for
> it... If You know (and You do) that v4 can't handle > big results add pagination like every other
> webapp...
>
> We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
>
> What we have now gives users the interface they want, and presents the data to them quickly - far
> more quickly than pgAdmin 3 ever did when working with larger resultsets.
>
> If that's pointless for you, then that's fine, but other users appreciate the speed and
> responsiveness.

I don't know of any users (we are the users) who are happy that selecting 10000 rows requires dragging the scrollbar five times to see the 5001st record...

Saying pointless, I meant that if I want 10000 rows I should get 10000 rows; if I want to limit my data I'll use LIMIT. But if the UI can't handle big results, just give me the easiest/fastest way to get to my data.

Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the way you work. Very few people scroll as you suggest - if you know you want to see the 5001st record, it's common to use limit/offset. If you don't know, then you're almost certainly going to scroll page by page or similar, reading the results as you go - in which case, the batch loading will speed things up for you as you'll have a much quicker "time to view first page".


--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Re: Data grid: fetching/scrolling data on user demand

From:
"Tomek"
Date:
Hi,

>>>> It is not exactly true... In v3 the query is executed, fetched, and all rows are displayed,
>>>
>>> No they're not, though they are all transferred to the client which is why it's slower.
>>
>> They are not what?
>
> The handling of rows in pgAdmin 3 is not as you described.

So please tell me how it is done.

>> What is slower - is the "display" part in both versions. You have data from server and than You
>> push it to display.
>> I've done quick test - table 650000 rows / 45 columns, query SELECT * from table limit 100000.
>> With default ON_DEMAND_RECORD_COUNT around 5 seconds, with ON_DEMAND_RECORD_COUNT = 100000 25
>> seconds...
>> It is 20 seconds spent only on displaying...
>
> So? No human can read that quickly.

I get the joke but it doesn't add anything to this discussion...

>>>> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It
>>> is done only because the main lag of v4 comes from interface. I don't see any other purpose for
>>> it... If You know (and You do) that v4 can't handle > big results add pagination like every other
>>> webapp...
>>>
>>> We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
>>>
>>> What we have now gives users the interface they want, and presents the data to them quickly - far
>>> more quickly than pgAdmin 3 ever did when working with larger resultsets.
>>>
>>> If that's pointless for you, then that's fine, but other users appreciate the speed and
>>> responsiveness.

Part of the data...
Please explain to me what is the point of requesting 100000 and getting 1000 without the possibility to access the rest?

>> I don't know of any users (we are the users) who are happy that selecting 10000 rows requires
>> dragging scrollbar five times to see 5001 record...
>
>> Saying pointless I meant that if I want 10000 rows I should get 10000 rows, if I want to limit my
>> data I'll use LIMIT. But if the ui can't handle big results just give me easiest/fastest way to get
>> to may data.
>
> Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the way you work. Very few
> people scroll as you suggest - if you know you want to see the 5001 record, it's common to use
> limit/offset. If you don't know, then you're almost certainly going to scroll page by page or
> similar, reading the results as you go - in which case, the batch loading will speed things up for
> you as you'll have a much quicker "time to view first page".

You can read, yes? I wrote about LIMIT earlier... I'll put it in simpler terms, "if you don't know" what I mean.

You understand that if a user selects data, user wants to get data? - that is why he is doing select.
You understand that to get the data is to see the data? - that is why he is doing select.
You understand that data can take more than 1000 records?
You understand that You hide the data without possibility to access it (at least quickly)?
You understand that in this example user didn't get the data?

I hope that the above clarifies the pointlessness of "display on demand" without some other option to speed up the browsing process.

As for increasing ON_DEMAND_RECORD_COUNT - I would gladly do it, but then displaying 100000 records takes more time and
about 500 MB of memory (v3 used around 80)... What is more, in v4 I can't get 300000 records even with the default
ON_DEMAND_RECORD_COUNT (I canceled the query after 5 minutes), while v3 managed to do it in 80 seconds.

I understand that large results are rare, but this is a management tool, not some office app... It should do what I
want...

And again You are telling me what I want and what I need, and how I should do it... You decided to make this tool in a
way that no other db management tool works, You cut out a lot of v3 features, You purposely limit usability to account for
poor performance and call it a feature...
It reminds me of some big software company in the old days...

--
Tomek


Re: Data grid: fetching/scrolling data on user demand

From:
Dave Page
Date:


On Tue, Oct 17, 2017 at 1:44 PM, Tomek <tomek@apostata.org> wrote:
Hi,

>>>> It is not exactly true... In v3 the query is executed, fetched, and all rows are displayed,
>>>
>>> No they're not, though they are all transferred to the client which is why it's slower.
>>
>> They are not what?
>
> The handling of rows in pgAdmin 3 is not as you described.

So please tell me how is it done.


I don't have the spare cycles to start explaining how pgAdmin 3 worked internally.
 

>> What is slower - is the "display" part in both versions. You have data from server and than You
>> push it to display.
>> I've done quick test - table 650000 rows / 45 columns, query SELECT * from table limit 100000.
>> With default ON_DEMAND_RECORD_COUNT around 5 seconds, with ON_DEMAND_RECORD_COUNT = 100000 25
>> seconds...
>> It is 20 seconds spent only on displaying...
>
> So? No human can read that quickly.

I get the joke but it doesn't add anything to this discussion...

It wasn't a joke.
 

>>>> For me this idea of "load on demand" (which in reality is "display on demand") is pointless. It
>>> is done only because the main lag of v4 comes from interface. I don't see any other purpose for
>>> it... If You know (and You do) that v4 can't handle > big results add pagination like every other
>>> webapp...
>>>
>>> We did that in the first beta, and users overwhelmingly said they didn't like or want pagination.
>>>
>>> What we have now gives users the interface they want, and presents the data to them quickly - far
>>> more quickly than pgAdmin 3 ever did when working with larger resultsets.
>>>
>>> If that's pointless for you, then that's fine, but other users appreciate the speed and
>>> responsiveness.

Part of the data...
Please explain to me what is the point of requesting 100000 and getting 1000 without the possibility to access the rest?

Of course you can get all 100000, you just scroll down. Saying that you can only get 1000 "without possibility to access the rest" is 100% factually incorrect. 


>> I don't know of any users (we are the users) who are happy that selecting 10000 rows requires
>> dragging scrollbar five times to see 5001 record...
>
>> Saying pointless I meant that if I want 10000 rows I should get 10000 rows, if I want to limit my
>> data I'll use LIMIT. But if the ui can't handle big results just give me easiest/fastest way to get
>> to may data.
>
> Then increase ON_DEMAND_RECORD_COUNT to a higher value if that suits the way you work. Very few
> people scroll as you suggest - if you know you want to see the 5001 record, it's common to use
> limit/offset. If you don't know, then you're almost certainly going to scroll page by page or
> similar, reading the results as you go - in which case, the batch loading will speed things up for
> you as you'll have a much quicker "time to view first page".

You can read yes? I wrote about LIMIT earlier... I'll put it in simpler terms "If you don't know" what I mean.

You understand that if a user selects data, user wants to get data? - that is why he is doing select.
You understand that to get the data is to see the data? - that is why he is doing select.
You understand that data can take more than 1000 records?
You understand that You hide the data without possibility to access it (at least quickly)?
You understand that in this example user didn't get the data?

I hope that above clarifies pointlessness of "display on demand" without some other option to speed up the browsing process.

As for increasing ON_DEMAND_RECORD_COUNT - I would gladly do it but then displaying 100000 records takes more time and about 500 Mb of memory (v3 used around 80)... What is more in v4 I can't get 300000 records even with default ON_DEMAND_RECORD_COUNT (canceled query after 5 minutes) while v3 managed to do it in 80 seconds.

I understand that large results are rare but this is a management tool not some office app... It should do what I want...

And again You are telling me what I want and what I need, and how I should do it... You decided to make this tool in a way that no other db management tool works,

No - I'm telling you what nearly 20 years of interacting with pgAdmin users has indicated they likely want. If it's not what you want, then you are free to look at other tools. We do our best to meet the requirements of the majority of our users, but it's obviously impossible to meet everyone's requirements.
 
You cut out a lot of v3 features,

Yes. Did you ever create an operator family? Or use the graphical query designer that crashed all the time? Those and many other features were intentionally removed. Some have been put back in cases where it became clear they were needed and others will still be added in the future - but the vast majority of things we left out will likely remain gone because they add complexity and maintenance costs for zero or near zero value.
 
You purposely limit usability to account for poor performance and call it a feature...

You haven't (as far as I can see) described how we limited usability, except by making claims that are provably wrong. 

I'm not going to spend any more of my free time on this thread. I have little enough as it is.

Re: Data grid: fetching/scrolling data on user demand

From:
"Tomek"
Date:
Hi,

> On Tue, Oct 17, 2017 at 1:44 PM, Tomek <tomek@apostata.org> wrote:
>
>> Hi,
>>
>>>>>> It is not exactly true... In v3 the query is executed, fetched, and all rows are displayed,
>>>>>
>>>>> No they're not, though they are all transferred to the client which is why it's slower.
>>>>
>>>> They are not what?
>>>
>>> The handling of rows in pgAdmin 3 is not as you described.
>>
>> So please tell me how is it done.
>
> You can find the code here:
> https://git.postgresql.org/gitweb/?p=pgadmin3.git;a=tree;h=216f52a5135db3124268a32c553b812547d4b918;b=216f52a5135db3124268a32c553b812547d4b918

You are using libpq, You fetch data, You display data. I don't know how to describe it more simply...

>> Part of the data...
>> Please explain to me what is the point of requesting 100000 and getting 1000 without the
>> possibility to access the rest?
>
> Of course you can get all 100000, you just scroll down. Saying that you can only get 1000 "without
> possibility to access the rest" is 100% factually incorrect.

Right I forgot to add "at least quickly" as I later did.

> No - I'm telling you what nearly 20 years of interacting with pgAdmin users has indicated they
> likely want. If it's not what you want, then you are free to look at other tools. We do our best to
> meet the requirements of the majority of our users, but it's obviously impossible to meet everyones
> requirements.
>
>> You cut out a lot of v3 features,
>
> Yes. Did you ever create an operator family? Or use the graphical query designer that crashed all
> the time? Those and many other features were intentionally removed. Some have been put back in
> cases where it became clear they were needed and others will still be added in the future - but the
> vast majority of things we left out will likely remain gone because they add complexity and
> maintenance costs for zero or near zero value.

This is old - a lot of people have already told You what is missing - and not some deprecated stuff, but important stuff like,
for example, the possibility to edit table data without a PRIMARY key and OIDS=TRUE...

>> You purposely limit usability to account for poor performance and call it a feature...
>
> You haven't (as far as I can see) described how we limited usability, except by making claims that
> are provably wrong.

I know - almost every suggestion/problem with v4 in comparison to v3 on this list was dismissed by You, by telling us
Users that we do not know what we want, and that the only one who does is You.
> I'm not going to spend any more of my free time on this thread. I have little enough as it is.

Of course, because I'm being paid for talking to the wall...

--
Tomek