Thread: [pgadmin-hackers] Feature test issues


[pgadmin-hackers] Feature test issues

From: Dave Page

Hi Tira, George,

I've just been updating our internal automated test system to run the
feature tests, and ran into a couple of additional issues that need to
be addressed. Can you look into the following please?

- When starting pgAdmin, it's using the default configuration database
(CONFIG['SQLITE_PATH']); however, for testing we should be using
CONFIG['TEST_SQLITE_PATH']. This means it's polluting the user's
default configuration (and, if they don't have one, causing an
additional initialisation step).
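
(The shape of the fix I'd expect is something like the following - a minimal sketch, assuming the runner can simply override the path in config before the Flask app is created; the exact hook point in runtests.py may differ:)

# Sketch only: in the test runner, before the app is created.
import config

# Point the app at the throwaway test database rather than the user's
# real configuration database.
config.SQLITE_PATH = config.TEST_SQLITE_PATH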

- With Python 2.6, the following failure is seen when the first
feature test is run:

Traceback (most recent call last):
  File "/var/lib/jenkins/workspace/pgadmin4-master-python26/web/regression/runtests.py", line 286, in <module>
    verbosity=2).run(suite)
  File "/var/lib/jenkins/workspace/pgadmin4-master-python26/pgadmin-venv/lib/python2.6/site-packages/unittest2/runner.py", line 172, in run
    test(result)
  File "/var/lib/jenkins/workspace/pgadmin4-master-python26/pgadmin-venv/lib/python2.6/site-packages/unittest2/suite.py", line 87, in __call__
    return self.run(*args, **kwds)
  File "/var/lib/jenkins/workspace/pgadmin4-master-python26/pgadmin-venv/lib/python2.6/site-packages/unittest2/suite.py", line 126, in run
    test(result)
  File "/var/lib/jenkins/workspace/pgadmin4-master-python26/pgadmin-venv/lib/python2.6/site-packages/unittest2/case.py", line 673, in __call__
    return self.run(*args, **kwds)
  File "/var/lib/jenkins/workspace/pgadmin4-master-python26/pgadmin-venv/lib/python2.6/site-packages/unittest2/case.py", line 633, in run
    self._feedErrorsToResult(result, outcome.errors)
  File "/var/lib/jenkins/workspace/pgadmin4-master-python26/pgadmin-venv/lib/python2.6/site-packages/unittest2/case.py", line 563, in _feedErrorsToResult
    if issubclass(exc_info[0], self.failureException):
TypeError: issubclass() arg 2 must be a class or tuple of classes
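
(For what it's worth, that message just means issubclass() was handed something other than a class as its second argument - i.e. it looks like failureException is being clobbered somewhere. A guess at the mechanism, not a confirmed diagnosis:)

>>> issubclass(TypeError, AssertionError)
False
>>> # If failureException were somehow an instance rather than a class:
>>> issubclass(TypeError, AssertionError("oops"))
Traceback (most recent call last):
  ...
TypeError: issubclass() arg 2 must be a class or tuple of classes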

For completeness, other issues outstanding that we've previously discussed:

- pgAdmin processes may remain running after test failures.

- The test suite may hang following a feature test failure, at the end
of the run.

- The screenshot functionality should be fixed (ideally) or removed.

- The tests really need to run with a single instantiation of pgAdmin.
It's clearly going to be far too slow to start/stop pgAdmin for every
test once we start adding more (and moving forward, I really want
feature tests to become the default to ensure we're end-to-end testing
everything). For reference, each test run (currently one version of
Python, against 5 different database servers) is now taking ~5 minutes
vs. 1m47s without the feature tests.
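
(To sketch the shape I have in mind - AppStarter here is an illustrative stub, not an actual helper from the tree:)

# Sketch only: launch pgAdmin (server + browser) once per test module
# rather than once per test.
import unittest

class AppStarter(object):
    # Stub standing in for whatever actually launches pgAdmin and the
    # browser; illustrative only.
    def start_app(self):
        print("starting pgAdmin and the browser once")

    def reset_page(self):
        print("cheap reset between tests, no restart")

    def stop_app(self):
        print("single shutdown at the end of the run")

app_starter = None

def setUpModule():
    global app_starter
    app_starter = AppStarter()
    app_starter.start_app()

def tearDownModule():
    app_starter.stop_app()

class ExampleFeatureTest(unittest.TestCase):
    def setUp(self):
        app_starter.reset_page()

    def test_example(self):
        pass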

On the plus side, test runs are now green across the board with
feature tests enabled, except for Python 2.6 :-)

Thanks!

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgadmin-hackers] Feature test issues

From: Atira Odhner

Cool - we already have a task about proper teardown and a couple of other things in our backlog; we'll probably get to it in the next day or so. I'll take a look at the other stuff.

Also, regarding speed: even without the app startup time, end-to-end tests are always going to be relatively slow. We definitely want to make sure that the time it takes to run the tests does not grow to the point where it is a deterrent to running them.

There are a variety of things we can do to help address that as our suite grows. For instance, we could consider parallelizing the tests (see the sketch below), making setup and teardown more efficient, combining related tests, or even breaking the tests into suites and running only some of them locally by default.

Since we only have a couple of feature tests so far, the speed hasn't really felt like an issue for me yet, but I understand it may be different if you are trying to run in a variety of configurations.
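
(To illustrate the parallelization idea - run_suite_against is a hypothetical wrapper around the existing regression runner, and the server IDs are placeholders:)

# Sketch only: run the suite against each database server in its own
# process.
from concurrent.futures import ProcessPoolExecutor

def run_suite_against(server_id):
    # Invoke the regression suite for a single server and return a
    # summary of the outcome (stubbed out here).
    return (server_id, "ok")

if __name__ == "__main__":
    servers = ["server-1", "server-2", "server-3", "server-4", "server-5"]
    # One worker per server; each suite runs in its own process.
    with ProcessPoolExecutor(max_workers=len(servers)) as pool:
        results = list(pool.map(run_suite_against, servers))
    print(results)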

Out of curiosity, what is the goal in supporting multiple Python versions? Are we working on moving to 3.x and just haven't gotten fully there yet?

Tira


Re: [pgadmin-hackers] Feature test issues

From: Ashesh Vashi

On Mon, Feb 27, 2017 at 9:32 AM, Atira Odhner <aodhner@pivotal.io> wrote:
> Cool - we already have a task about proper teardown and a couple of
> other things in our backlog; we'll probably get to it in the next day
> or so. I'll take a look at the other stuff.
>
> Out of curiosity, what is the goal in supporting multiple Python versions?

We support Python 2.6, 2.7 and 3.3+.

> Are we working on moving to 3.x and just haven't gotten fully there yet?

There is no plan to move to Python 3 only. We support Python 2.6 as
well, so that pgAdmin works out of the box on systems like CentOS 6.

We (at EnterpriseDB) test pgAdmin 4 with Python 2.6, 2.7, 3.3, 3.4, 3.5 and 3.6.

--
Thanks,
Ashesh Vashi


Re: [pgadmin-hackers] Feature test issues

From: Dave Page

On Mon, Feb 27, 2017 at 4:02 AM, Atira Odhner <aodhner@pivotal.io> wrote:
> Cool, we already have a task about proper teardown and a couple other things
> in our backlog. we'll probably get to it in the next day or so. I'll take a
> look at the other stuff.

Thanks.

> Also, regarding speed, even without the app startup time, end to end tests
> are always going to be relatively slow. We definitely want to make sure that
> the time it takes to run the tests does not grow to where it is a deterrent
> to running them.

Right - I expect them to be slower, but 1.5+ minutes per test (with 5
DB servers - we're soon going to have 10+) is not going to work. I
want to get us to the point where we're doing test driven development,
with the aim of always having the tree in a releasable state.

> There are a variety of things we can do to help address that as our suite
> grows. For instance,  we could consider parallelizing the tests, making
> setup and teardown more efficient,  combining related tests, or even
> breaking the tests into suites and running only some of them locally by
> default.
> Since we only have a couple feature tests so far the speed hasn't really
> felt like an issue for me yet, but I understand it may be different if you
> are trying to run in a variety of configurations.

I'm looking ahead to where we want to be. I don't want the test suite
to become a source of technical debt.

> Out of curiosity, what is the goal in supporting multiple python versions?
> Are we working on moving to 3.x and just haven't gotten fully there yet?

We need to support multiple versions of Python because that's what
users have on their systems. For example, RHEL/CentOS 6, which are
still in wide use, ship with Python 2.6.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgadmin-hackers] Feature test issues

From: Dave Page

On Sun, Feb 26, 2017 at 8:39 AM, Dave Page <dpage@pgadmin.org> wrote:
>
> On the plus side, test runs are now green across the board with
> feature tests enabled, except for Python 2.6 :-)

I say that, however...

- I've now seen the same error with Python 3.6 as with 2.6; whilst
it's consistent on 2.6, it's intermittent on 3.6. I've also seen it on
other versions of Python, though less often than on 3.6 or 2.6.

- One or both of the feature tests seem to be failing ~90% of the time
with 2.7. It seems almost random as to which one will fail, and on
which database server:

Traceback (most recent call last):
  File "/var/lib/jenkins/workspace/pgadmin4-master-python27/web/pgadmin/feature_tests/connect_to_server_feature_test.py", line 23, in setUp
    super(ConnectsToServerFeatureTest, self).setUp()
  File "/var/lib/jenkins/workspace/pgadmin4-master-python27/web/regression/feature_utils/base_feature_test.py", line 19, in setUp
    self.page.wait_for_app()
  File "/var/lib/jenkins/workspace/pgadmin4-master-python27/web/regression/feature_utils/pgadmin_page.py", line 110, in wait_for_app
    self._wait_for("app to start", page_shows_app)
  File "/var/lib/jenkins/workspace/pgadmin4-master-python27/web/regression/feature_utils/pgadmin_page.py", line 124, in _wait_for
    raise AssertionError("timed out waiting for " + waiting_for_message)
AssertionError: timed out waiting for app to start

Unfortunately, I don't see the actual assertion message. It would be
nice to know what waiting_for_message contained.
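
(For what it's worth, the kind of change that would help might look like this - a standalone sketch of the helper's logic, with the signature guessed from the traceback; the real method in pgadmin_page.py may differ:)

import time

def wait_for(waiting_for_message, condition_met_function,
             timeout=30, poll_interval=0.5):
    # Poll until the condition holds or the timeout expires.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition_met_function():
            return
        time.sleep(poll_interval)
    # Print the message as well as raising it, so it remains visible
    # even if the runner truncates or swallows the assertion text.
    message = "timed out waiting for " + waiting_for_message
    print(message)
    raise AssertionError(message)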

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgadmin-hackers] Feature test issues

From: Atira Odhner

The team here at Pivotal is doing TDD already. Running so many configurations sounds like it belongs in a CI, not in a TDD cycle.

I want to make sure we are making the most effective technical investment that we can. My thought process is generally:

- Are we feeling pain from this right now? (Is it slowing down development, leading to confusion, bugs, etc.?)

- If not, are we painting ourselves into a corner? (Will it be an order of magnitude harder to make this investment later?)

- How does this pain compare to other sources of technical pain?

Once we've decided to invest in easing a pain, I find it valuable to step back and see whether there are other options that might also address it.

In the case of running multiple configurations, I suspect you might get a 2-3x speedup of the entire suite by running tests in parallel.
The app startup time is pretty fast - about two to three seconds. At this point, eliminating it would shave about 6 seconds per suite run, or 30 seconds across your 5-server run, versus the 3-4 minutes I expect would be saved through parallelization.

Regarding the Python versions, have we considered packaging the app bundle with the latest version of Python that we support? Then we could run tests with that version and the latest version of Python that we are working towards supporting. That would save us the headache of trying to straddle the 2.x/3.x language gap.

Tira


Re: [pgadmin-hackers] Feature test issues

From: Dave Page

On Mon, Feb 27, 2017 at 7:51 PM, Atira Odhner <aodhner@pivotal.io> wrote:
> The team here at Pivotal is doing TDD already. Running so many
> configurations sounds like it belongs in a CI not in a TDD cycle.

It is in a CI - that doesn't mean I don't want us to aim towards the
point where we're doing TDD as well.

> I want to make sure we are making the most effective technical investment
> that we can. My thought process is generally:
> Are we feeling pain from this right now? (is it slowing down development,
> leading to confusion, bugs, etc?)

Well, the randomly failing tests are, yes. The speed isn't - but it
likely will be if not addressed.

> If not, are we painting ourselves into a corner? (Will it be an order of
> magnitude harder to make this investment later?)

Possibly, if tests need to be rewritten for single-server-instance or
parallel execution.

> How does this pain compare to other sources of technical pain?

For me, aside from reviewing patches, which will always take time
and effort, getting the tests to run fully and reliably is my biggest
source of pain.

> Once we've decided to invest in easing a pain, I find it valuable to step
> back and try to see if there are other options that might also address it.
>
> In the case of running multiple configurations,  I suspect you might get a 2
> or 3x speedup of the entire suite by running tests in parallel.
> The app startup time is pretty fast. It takes about two to three seconds. At
> this point, that would shave about 6 seconds per suite run or 30 seconds in
> your 5-server run vs 3-4 minutes that I expect would be saved through
> parallelization.

On my top-of-the-range-last-year Mac, startup takes about 14 seconds
per test - that's from the start of the test, until the point at which
the "Loading pgAdmin..." spinner vanishes.

Running the tests in parallel on each DB server would certainly help,
I agree, but given the current architecture of the feature tests, when
we get to 100 tests in the suite we'll be wasting around 23 minutes per
run (100 tests x ~14 seconds) on startup time alone. And given that
changing the tests to run in a single browser session with a single
pgAdmin server would almost certainly require changes to the tests
themselves, I think it's prudent to do that now rather than when we
actually have 100 of them.

> Regarding the python versions, have we considered packaging the app bundle
> with the latest version of python that we support? Then we could run tests
> with that version and the latest version of python that we are working
> towards supporting. That would save us the headache of trying to straddle
> the 2.x/3.x language gap.

The app bundle is just one package - and one that only needs Python
2.7, as that's all macOS has shipped for years. On Windows, we do
ship Python as part of the package, as there is no Python interpreter
there by default.

However, we obviously cannot ship Python as part of the Python wheel,
nor is it practical to ship a private build of Python in RPM or DEB
packages (doing so would almost certainly breach the distros'
packaging rules, preventing pgAdmin 4 from ever shipping in EPEL or
Canonical's package repos).

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgadmin-hackers] Feature test issues

From: Atira Odhner

> On my top-of-the-range-last-year Mac, startup takes about 14 seconds
> per test - that's from the start of the test, until the point at which
> the "Loading pgAdmin..." spinner vanishes.

That includes the time it takes to load the page, not just app startup time; the spinner doesn't appear until the app has started. We're working today on eliminating the multiple startups, but that won't have an effect on page load time - I think the only way to fix that part of the test run time would be to actually improve the page load time. That said, we're seeing tests run much faster than you are - 43 seconds for the entire suite, of which 30 seconds was the two feature tests.

> The app bundle is just one package - and one that only needs Python
> 2.7, as that's all macOS has shipped for years. On Windows, we do
> ship Python as part of the package, as there is no Python interpreter
> there by default.
>
> However, we obviously cannot ship Python as part of the Python wheel,
> nor is it practical to ship a private build of Python in RPM or DEB
> packages (doing so would almost certainly breach the distros'
> packaging rules, preventing pgAdmin 4 from ever shipping in EPEL or
> Canonical's package repos).

Okay, I think that makes sense. Thanks for the explanation.

Tira


Re: [pgadmin-hackers] Feature test issues

From: Dave Page

On Tue, Feb 28, 2017 at 4:50 PM, Atira Odhner <aodhner@pivotal.io> wrote:
>> On my top-of-the-range-last-year Mac, startup takes about 14 seconds
>> per test - that's from the start of the test, until the point at which
>> the "Loading pgAdmin..." spinner vanishes.
>
>
> That includes the time it takes to load the page, not just app startup time.
> The spinner doesn't appear until the app has started. We're working today on
> eliminating the multiple startups, but it won't have an effect on page load
> time. I think the only way to fix that test run time would be to actually
> improve the page load time. That said, we're seeing tests run much faster
> than you are-- 43 seconds for the entire suite of which 30 seconds was the
> two feature tests.

Right - when I talk about having a single startup, I'm not just
talking about the app server, but the browser and initial page load as
well. It may pay to have a function we can call that resets the
treeview and the tabset, allowing us to reset the view cheaply
between groups of tests (assuming we might have a number of tests that
could all run in the same query tool instance, for example). A sketch
of the kind of hook I mean is below.
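
(reset_layout is the hypothetical entry point we would need to add - nothing like it exists today; driver is a Selenium WebDriver instance:)

# Sketch only: a cheap reset between groups of tests, assuming we add
# a JS entry point that collapses the treeview and closes the tabset
# without reloading the page. window.pgAdmin.Browser.reset_layout is
# hypothetical; 'driver' is a selenium.webdriver instance.
def reset_view(driver):
    driver.execute_script("window.pgAdmin.Browser.reset_layout();")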



--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [pgadmin-hackers] Feature test issues

From: Atira Odhner

Just to clarify the change we made in this regard - we use the 'reset layout' link in the header between tests, but it essentially still does a page load, so it doesn't save us time. If we were to do an actual page load, we would run into issues because of the navigation confirmation popup. If we were to manually reset the layout, I think it would be very difficult to account for all the various scenarios and ensure a reasonable separation between tests.

Tira


Re: [pgadmin-hackers] Feature test issues

From: Dave Page

On Thu, Mar 2, 2017 at 2:36 PM, Atira Odhner <aodhner@pivotal.io> wrote:
> Just to clarify the change we made in this regard --- we use the 'reset
> layout' link in the header between tests, but it essentially still does a
> page load and doesn't save us time. If we were to do an actual page load we
> would run into issues because of the navigation confirmation popup. If we
> were to manually reset the layout I think it would be very difficult to
> account for all the various scenarios and ensure a reasonable separation
> between tests.

Understood. I think that's fine - the further optimisation would be to
think about test groups, where tests can safely be run in sequence
without a reset in between. I think it would be reasonable in such
cases to abort the group if a single test fails, since a failure
potentially leaves the UI in an unknown state. That would save a lot
of time when testing the debugger or query tool, where you have to
navigate to a suitable node and then load the tool before you can even
begin testing.

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company