EXPLAIN
EXPLAIN — show the execution plan of a statement

Synopsis

EXPLAIN [ ( parameter [, ...] ) ] statement
EXPLAIN [ ANALYZE ] [ VERBOSE ] statement

where parameter can be one of:

    ANALYZE [ boolean ]
    VERBOSE [ boolean ]
    COSTS [ boolean ]
    SETTINGS [ boolean ]
    BUFFERS [ boolean ]
    WAL [ boolean ]
    TIMING [ boolean ]
    SUMMARY [ boolean ]
    FORMAT { TEXT | XML | JSON | YAML }
Description

This command displays the execution plan that the Postgres Pro planner generates for the supplied statement. The execution plan shows how the tables referenced by the statement will be scanned (by plain sequential scan, index scan, and so on) and, if multiple tables are referenced, what join algorithms will be used to bring together the required rows from each input table.

The most critical part of the output is the estimated statement execution cost, which is the planner's guess at how long it will take to run the statement (measured in cost units that are arbitrary, but conventionally mean disk page fetches). Actually two numbers are shown: the start-up cost before the first row can be returned, and the total cost to return all the rows. For most queries the total cost is what matters, but in contexts such as a subquery in EXISTS, the planner will choose the smallest start-up cost instead of the smallest total cost (since the executor will stop after getting one row anyway). Also, if you limit the number of rows to return with a LIMIT clause, the planner makes an appropriate interpolation between the endpoint costs to choose the cheapest plan.
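As a rough illustration (the table name items and its columns are hypothetical, not part of this reference page), comparing plans for the same query with and without LIMIT shows how the planner weighs start-up cost against total cost:

-- The cost=N..M pair printed for each plan node is the start-up cost
-- and the total cost discussed above.
EXPLAIN SELECT * FROM items ORDER BY created_at;
EXPLAIN SELECT * FROM items ORDER BY created_at LIMIT 10;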
With the ANALYZE option, the statement is actually executed, not only planned. Actual run-time statistics are then added to the display, including the total elapsed time expended within each plan node (in milliseconds) and the total number of rows it actually returned. This is useful for seeing whether the planner's estimates are close to reality.

Important

Keep in mind that the statement is actually executed when the ANALYZE option is used. Although EXPLAIN will discard any output that a SELECT would return, other side effects of the statement happen as usual. If you wish to use EXPLAIN ANALYZE on an INSERT, UPDATE, DELETE, MERGE, CREATE TABLE AS, or EXECUTE statement without letting the command affect your data, use this approach:
BEGIN; EXPLAIN ANALYZE ...; ROLLBACK;
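For instance, a minimal sketch of this approach for a hypothetical UPDATE (the accounts table and its columns are assumptions used only for illustration):

BEGIN;
EXPLAIN ANALYZE UPDATE accounts SET balance = balance * 1.01 WHERE balance > 0;
ROLLBACK;   -- the UPDATE ran so it could be measured, but its changes are discarded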
Only the ANALYZE and VERBOSE options can be specified, and only in that order, without surrounding the option list in parentheses. Prior to PostgreSQL 9.0, the unparenthesized syntax was the only one supported; it is expected that all new options will be supported only in the parenthesized syntax.
Parameters

ANALYZE

Carry out the command and show actual run times and other statistics. This parameter defaults to FALSE.

VERBOSE

Display additional information regarding the plan. Specifically, include the output column list for each node in the plan tree, schema-qualify table and function names, always label variables in expressions with their range table alias, and always print the name of each trigger for which statistics are displayed. The query identifier will also be displayed if one has been computed; see compute_query_id for details. This parameter defaults to FALSE.

COSTS

Include information on the estimated start-up and total cost of each plan node, as well as the estimated number of rows and the estimated width of each row. This parameter defaults to TRUE.

SETTINGS

Include information on configuration parameters. Specifically, include options affecting query planning with values different from the built-in default value. This parameter defaults to FALSE (the information is not shown).

BUFFERS

Include information on buffer usage. Specifically, include the number of blocks hit, read, dirtied, and written in the shared and local buffers, the number of temp blocks read and written, and, if track_io_timing is enabled, the time spent reading and writing data file blocks and temporary file blocks (in milliseconds). A hit means that a read was avoided because the block was already found in cache. Shared blocks contain data from regular tables and indexes; local blocks contain data from temporary tables and indexes; temp blocks contain short-term working data used by sorts, hashes, Materialize plan nodes, and similar cases. The number of blocks dirtied indicates how many previously unmodified blocks were changed by this query, while the number of blocks written indicates how many previously dirtied blocks this backend evicted from cache during query processing. The values reported for an upper-level node include those of all its child nodes. In text format, only non-zero values are printed. This parameter has an effect only in ANALYZE mode. It defaults to FALSE.

WAL

Include information on WAL record generation. Specifically, include the number of records, the number of full page images (fpi), and the amount of WAL generated in bytes. In text format, only non-zero values are printed. This parameter may only be used when ANALYZE is also enabled. It defaults to FALSE.

TIMING

Include the actual startup time and the time spent in each node in the output. The overhead of repeatedly reading the system clock can slow the query down significantly, so it may be useful to set this parameter to FALSE when only actual row counts, and not exact times, are needed. The run time of the entire statement is always measured, even when node-level timing is turned off with this option. This parameter has an effect only in ANALYZE mode. It defaults to TRUE.

SUMMARY

Include summary information (e.g., totaled timing information) after the query plan. Summary information is included by default when ANALYZE is used, but this parameter also allows it to be obtained with other variants of the command. Planning time shown by EXPLAIN EXECUTE includes the time required to fetch the plan from the cache and the time required for re-planning, if necessary.

FORMAT

Specify the output format, which can be TEXT, XML, JSON, or YAML. The last three formats contain the same information as the text format but are better suited for programmatic parsing. This parameter defaults to TEXT. (A combined example follows this parameter list.)

boolean

Turns the specified option on or off. You can write TRUE, ON, or 1 to enable the option, and FALSE, OFF, or 0 to disable it. The boolean value can also be omitted, in which case TRUE is assumed.

statement

Any SELECT, INSERT, UPDATE, DELETE, MERGE, VALUES, EXECUTE, DECLARE, CREATE TABLE AS, or CREATE MATERIALIZED VIEW AS statement whose execution plan you wish to see.
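As a combined sketch (the accounts table is hypothetical; any statement from the list above works), note that BUFFERS and TIMING take effect only together with ANALYZE:

EXPLAIN (ANALYZE, BUFFERS, SETTINGS, FORMAT JSON)
    SELECT count(*) FROM accounts WHERE balance > 0;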
Outputs

The command's result is a textual description of the plan selected for the statement, optionally annotated with execution statistics. Section 14.1 describes the information provided.
Notes

In order to allow the Postgres Pro query planner to make reasonably informed decisions when optimizing queries, the pg_statistic data should be up-to-date for all tables used in the query. Normally the autovacuum daemon takes care of that automatically. But if a table has recently had substantial changes in its contents, you might need to run a manual ANALYZE rather than wait for autovacuum to catch up with the changes.
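For example, after bulk-loading a table (accounts is again a hypothetical name) you might refresh its statistics by hand before inspecting plans:

ANALYZE accounts;   -- refresh planner statistics for this table
EXPLAIN SELECT * FROM accounts WHERE balance > 1000;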
In order to measure the run-time cost of each node in the execution plan, the current implementation of EXPLAIN ANALYZE adds profiling overhead to query execution. As a result, running a query with EXPLAIN ANALYZE can take significantly longer than executing it normally. The amount of overhead depends on the nature of the query, as well as the platform being used. The worst case occurs for plan nodes that in themselves require very little time per execution, and on operating systems where obtaining the current time is a relatively slow operation.
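When this overhead is a concern, a common sketch is to keep ANALYZE for the actual row counts but turn node-level timing off (same hypothetical table as above):

EXPLAIN (ANALYZE, TIMING FALSE) SELECT * FROM accounts WHERE balance > 1000;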
Examples

To show the plan for a simple query on a table with a single integer column and 10000 rows:

EXPLAIN SELECT * FROM foo;

                       QUERY PLAN
---------------------------------------------------------
 Seq Scan on foo  (cost=0.00..155.00 rows=10000 width=4)
(1 row)
Here is the same query, with JSON output formatting:

EXPLAIN (FORMAT JSON) SELECT * FROM foo;
           QUERY PLAN
--------------------------------
 [                             +
   {                           +
     "Plan": {                 +
       "Node Type": "Seq Scan",+
       "Relation Name": "foo", +
       "Alias": "foo",         +
       "Startup Cost": 0.00,   +
       "Total Cost": 155.00,   +
       "Plan Rows": 10000,     +
       "Plan Width": 4         +
     }                         +
   }                           +
 ]
(1 row)
If there is an index and we use a query with an indexable WHERE condition, EXPLAIN might show a different plan:

EXPLAIN SELECT * FROM foo WHERE i = 4;

                         QUERY PLAN
--------------------------------------------------------------
 Index Scan using fi on foo  (cost=0.00..5.98 rows=1 width=4)
   Index Cond: (i = 4)
(2 rows)
Here is the same query, but in YAML format:

EXPLAIN (FORMAT YAML) SELECT * FROM foo WHERE i='4';
          QUERY PLAN
-------------------------------
 - Plan:                      +
     Node Type: "Index Scan"  +
     Scan Direction: "Forward"+
     Index Name: "fi"         +
     Relation Name: "foo"     +
     Alias: "foo"             +
     Startup Cost: 0.00       +
     Total Cost: 5.98         +
     Plan Rows: 1             +
     Plan Width: 4            +
     Index Cond: "(i = 4)"
(1 row)
XML format is left as an exercise for the reader.
Here is the same plan with cost estimates suppressed:

EXPLAIN (COSTS FALSE) SELECT * FROM foo WHERE i = 4;

        QUERY PLAN
----------------------------
 Index Scan using fi on foo
   Index Cond: (i = 4)
(2 rows)
Here is an example of a query plan for a query using an aggregate function:

EXPLAIN SELECT sum(i) FROM foo WHERE i < 10;

                             QUERY PLAN
---------------------------------------------------------------------
 Aggregate  (cost=23.93..23.93 rows=1 width=4)
   ->  Index Scan using fi on foo  (cost=0.00..23.92 rows=6 width=4)
         Index Cond: (i < 10)
(3 rows)
Here is an example of using EXPLAIN EXECUTE to display the execution plan for a prepared query:

PREPARE query(int, int) AS SELECT sum(bar) FROM test
    WHERE id > $1 AND id < $2
    GROUP BY foo;

EXPLAIN ANALYZE EXECUTE query(100, 200);

                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=9.54..9.54 rows=1 width=8) (actual time=0.156..0.161 rows=11 loops=1)
   Group Key: foo
   ->  Index Scan using test_pkey on test  (cost=0.29..9.29 rows=50 width=8) (actual time=0.039..0.091 rows=99 loops=1)
         Index Cond: ((id > $1) AND (id < $2))
 Planning time: 0.197 ms
 Execution time: 0.225 ms
(6 rows)
Of course, the specific numbers shown here depend on the actual contents of the tables involved. Also note that the numbers, and even the selected query strategy, might vary between Postgres Pro releases due to planner improvements. In addition, the ANALYZE command uses random sampling to estimate data statistics, so cost estimates can change after any fresh run of ANALYZE, even if the actual distribution of data in the table has not changed.
Compatibility

There is no EXPLAIN statement defined in the SQL standard.
See Also

ANALYZE

4.2. Value Expressions
- 4.2.1. Column References
- 4.2.2. Positional Parameters
- 4.2.3. Subscripts
- 4.2.4. Field Selection
- 4.2.5. Operator Invocations
- 4.2.6. Function Calls
- 4.2.7. Aggregate Expressions
- 4.2.8. Window Function Calls
- 4.2.9. Type Casts
- 4.2.10. Collation Expressions
- 4.2.11. Scalar Subqueries
- 4.2.12. Array Constructors
- 4.2.13. Row Constructors
- 4.2.14. Expression Evaluation Rules
Value expressions are used in a variety of contexts, such as in the target list of the SELECT
command, as new column values in INSERT
or UPDATE
, or in search conditions in a number of commands. The result of a value expression is sometimes called a scalar, to distinguish it from the result of a table expression (which is a table). Value expressions are therefore also called scalar expressions (or even simply expressions). The expression syntax allows the calculation of values from primitive parts using arithmetic, logical, set, and other operations.
A value expression is one of the following:
A constant or literal value
A column reference
A positional parameter reference, in the body of a function definition or prepared statement
A subscripted expression
A field selection expression
An operator invocation
A function call
An aggregate expression
A window function call
A type cast
A collation expression
A scalar subquery
An array constructor
A row constructor
Another value expression in parentheses (used to group subexpressions and override precedence)
In addition to this list, there are a number of constructs that can be classified as an expression but do not follow any general syntax rules. These generally have the semantics of a function or operator and are explained in the appropriate location in Chapter 9. An example is the IS NULL
clause.
We have already discussed constants in Section 4.1.2. The following sections discuss the remaining options.
4.2.1. Column References
A column can be referenced in the form:
correlation.columnname
correlation
is the name of a table (possibly qualified with a schema name), or an alias for a table defined by means of a FROM
clause. The correlation name and separating dot can be omitted if the column name is unique across all the tables being used in the current query. (See also Chapter 7.)
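For example (the tables, columns, and the alias o below are hypothetical), the qualified form disambiguates columns that exist in more than one table, while a column name that is unique across the query can be written without qualification:

SELECT o.order_date, customers.name      -- "o" is an alias defined in the FROM clause
FROM orders AS o
JOIN customers ON o.customer_id = customers.id;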
4.2.2. Positional Parameters
A positional parameter reference is used to indicate a value that is supplied externally to an SQL statement. Parameters are used in SQL function definitions and in prepared queries. Some client libraries also support specifying data values separately from the SQL command string, in which case parameters are used to refer to the out-of-line data values. The form of a parameter reference is:
$number
For example, consider the definition of a function, dept
, as:
CREATE FUNCTION dept(text) RETURNS dept
    AS $$ SELECT * FROM dept WHERE name = $1 $$
    LANGUAGE SQL;
Here the $1
references the value of the first function argument whenever the function is invoked.
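Positional parameters look the same in prepared statements; a minimal sketch reusing the hypothetical dept table from the example above:

PREPARE dept_by_name(text) AS
    SELECT * FROM dept WHERE name = $1;

EXECUTE dept_by_name('engineering');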
4.2.3. Subscripts
If an expression yields a value of an array type, then a specific element of the array value can be extracted by writing
expression[subscript]
or multiple adjacent elements (an “array slice”) can be extracted by writing
expression[lower_subscript:upper_subscript]
(Here, the brackets [ ]
are meant to appear literally.) Each subscript
is itself an expression, which will be rounded to the nearest integer value.
In general the array expression
must be parenthesized, but the parentheses can be omitted when the expression to be subscripted is just a column reference or positional parameter. Also, multiple subscripts can be concatenated when the original array is multidimensional. For example:
mytable.arraycolumn[4]
mytable.two_d_column[17][34]
$1[10:42]
(arrayfunction(a,b))[42]
The parentheses in the last example are required. See Section 8.15 for more about arrays.
4.2.4. Field Selection
If an expression yields a value of a composite type (row type), then a specific field of the row can be extracted by writing
expression.fieldname
In general the row expression
must be parenthesized, but the parentheses can be omitted when the expression to be selected from is just a table reference or positional parameter. For example:
mytable.mycolumn
$1.somecolumn
(rowfunction(a,b)).col3
(Thus, a qualified column reference is actually just a special case of the field selection syntax.) An important special case is extracting a field from a table column that is of a composite type:
(compositecol).somefield
(mytable.compositecol).somefield
The parentheses are required here to show that compositecol
is a column name not a table name, or that mytable
is a table name not a schema name in the second case.
You can ask for all fields of a composite value by writing .*
:
(compositecol).*
This notation behaves differently depending on context; see Section 8.16.5 for details.
4.2.5. Operator Invocations
There are three possible syntaxes for an operator invocation:
expression operator expression (binary infix operator)
operator expression (unary prefix operator)
expression operator (unary postfix operator)
where the operator
token follows the syntax rules of Section 4.1.3, or is one of the key words AND
, OR
, and NOT
, or is a qualified operator name in the form:
OPERATOR(schema.operatorname)
Which particular operators exist and whether they are unary or binary depends on what operators have been defined by the system or the user. Chapter 9 describes the built-in operators.
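For example, each of the following invokes a built-in operator; the last form spells out the schema-qualified operator name:

SELECT 2 + 3;                        -- binary infix operator
SELECT -5;                           -- unary prefix operator
SELECT 2 OPERATOR(pg_catalog.+) 3;   -- qualified operator name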
4.2.6. Function Calls
The syntax for a function call is the name of a function (possibly qualified with a schema name), followed by its argument list enclosed in parentheses:
function_name ([expression [, expression ... ]] )
For example, the following computes the square root of 2:
sqrt(2)
The list of built-in functions is in Chapter 9. Other functions can be added by the user.
When issuing queries in a database where some users mistrust other users, observe security precautions from Section 10.3 when writing function calls.
The arguments can optionally have names attached. See Section 4.3 for details.
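As a brief sketch of that notation (concat_lower_or_upper is a hypothetical user-defined function, not a built-in), named arguments are written with =>:

SELECT concat_lower_or_upper(a => 'Hello', b => 'World', uppercase => true);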
Note
A function that takes a single argument of composite type can optionally be called using field-selection syntax, and conversely field selection can be written in functional style. That is, the notations col(table)
and table.col
are interchangeable. This behavior is not SQL-standard but is provided in PostgreSQL because it allows use of functions to emulate “computed fields”. For more information see Section 8.16.5.
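A minimal sketch of that equivalence, using a hypothetical table and a function that emulates a computed field:

CREATE TABLE employees (first_name text, last_name text);

CREATE FUNCTION full_name(employees) RETURNS text
    AS $$ SELECT $1.first_name || ' ' || $1.last_name $$ LANGUAGE SQL;

-- The functional and the field-selection notations are interchangeable:
SELECT full_name(e), e.full_name FROM employees AS e;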
4.2.7. Aggregate Expressions
An aggregate expression represents the application of an aggregate function across the rows selected by a query. An aggregate function reduces multiple inputs to a single output value, such as the sum or average of the inputs. The syntax of an aggregate expression is one of the following:
aggregate_name (expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (ALL expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (DISTINCT expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( * ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( [ expression [ , ... ] ] ) WITHIN GROUP ( order_by_clause ) [ FILTER ( WHERE filter_clause ) ]
where aggregate_name
is a previously defined aggregate (possibly qualified with a schema name) and expression
is any value expression that does not itself contain an aggregate expression or a window function call. The optional order_by_clause
and filter_clause
are described below.
The first form of aggregate expression invokes the aggregate once for each input row. The second form is the same as the first, since ALL
is the default. The third form invokes the aggregate once for each distinct value of the expression (or distinct set of values, for multiple expressions) found in the input rows. The fourth form invokes the aggregate once for each input row; since no particular input value is specified, it is generally only useful for the count(*)
aggregate function. The last form is used with ordered-set aggregate functions, which are described below.
Most aggregate functions ignore null inputs, so that rows in which one or more of the expression(s) yield null are discarded. This can be assumed to be true, unless otherwise specified, for all built-in aggregates.
For example, count(*)
yields the total number of input rows; count(f1)
yields the number of input rows in which f1
is non-null, since count
ignores nulls; and count(distinct f1)
yields the number of distinct non-null values of f1
.
Ordinarily, the input rows are fed to the aggregate function in an unspecified order. In many cases this does not matter; for example, min
produces the same result no matter what order it receives the inputs in. However, some aggregate functions (such as array_agg
and string_agg
) produce results that depend on the ordering of the input rows. When using such an aggregate, the optional order_by_clause
can be used to specify the desired ordering. The order_by_clause
has the same syntax as for a query-level ORDER BY
clause, as described in Section 7.5, except that its expressions are always just expressions and cannot be output-column names or numbers. For example:
SELECT array_agg(a ORDER BY b DESC) FROM table;
When dealing with multiple-argument aggregate functions, note that the ORDER BY
clause goes after all the aggregate arguments. For example, write this:
SELECT string_agg(a, ',' ORDER BY a) FROM table;
not this:
SELECT string_agg(a ORDER BY a, ',') FROM table; -- incorrect
The latter is syntactically valid, but it represents a call of a single-argument aggregate function with two ORDER BY
keys (the second one being rather useless since it's a constant).
If DISTINCT
is specified in addition to an order_by_clause
, then all the ORDER BY
expressions must match regular arguments of the aggregate; that is, you cannot sort on an expression that is not included in the DISTINCT
list.
Note
The ability to specify both DISTINCT
and ORDER BY
in an aggregate function is a PostgreSQL extension.
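For example (tbl, a, and b are hypothetical), the first call is allowed because the ORDER BY expression also appears in the DISTINCT list, while the second would be rejected:

SELECT array_agg(DISTINCT a ORDER BY a) FROM tbl;       -- allowed
-- SELECT array_agg(DISTINCT a ORDER BY b) FROM tbl;    -- error: b is not in the DISTINCT list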
Placing ORDER BY
within the aggregate's regular argument list, as described so far, is used when ordering the input rows for a “normal” aggregate for which ordering is optional. There is a subclass of aggregate functions called ordered-set aggregates for which an order_by_clause
is required, usually because the aggregate's computation is only sensible in terms of a specific ordering of its input rows. Typical examples of ordered-set aggregates include rank and percentile calculations. For an ordered-set aggregate, the order_by_clause
is written inside WITHIN GROUP (...)
, as shown in the final syntax alternative above. The expressions in the order_by_clause
are evaluated once per input row just like normal aggregate arguments, sorted as per the order_by_clause
's requirements, and fed to the aggregate function as input arguments. (This is unlike the case for a non-WITHIN GROUP
order_by_clause
, which is not treated as argument(s) to the aggregate function.) The argument expressions preceding WITHIN GROUP
, if any, are called direct arguments to distinguish them from the aggregated arguments listed in the order_by_clause
. Unlike normal aggregate arguments, direct arguments are evaluated only once per aggregate call, not once per input row. This means that they can contain variables only if those variables are grouped by GROUP BY
; this restriction is the same as if the direct arguments were not inside an aggregate expression at all. Direct arguments are typically used for things like percentile fractions, which only make sense as a single value per aggregation calculation. The direct argument list can be empty; in this case, write just ()
not (*)
. (PostgreSQL will actually accept either spelling, but only the first way conforms to the SQL standard.)
An example of an ordered-set aggregate call is:
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households;
 percentile_cont
-----------------
           50489
which obtains the 50th percentile, or median, value of the income
column from table households
. Here, 0.5
is a direct argument; it would make no sense for the percentile fraction to be a value varying across rows.
If FILTER
is specified, then only the input rows for which the filter_clause
evaluates to true are fed to the aggregate function; other rows are discarded. For example:
SELECT
    count(*) AS unfiltered,
    count(*) FILTER (WHERE i < 5) AS filtered
FROM generate_series(1,10) AS s(i);
 unfiltered | filtered
------------+----------
         10 |        4
(1 row)
The predefined aggregate functions are described in Section 9.20. Other aggregate functions can be added by the user.
An aggregate expression can only appear in the result list or HAVING
clause of a SELECT
command. It is forbidden in other clauses, such as WHERE
, because those clauses are logically evaluated before the results of aggregates are formed.
When an aggregate expression appears in a subquery (see Section 4.2.11 and Section 9.22), the aggregate is normally evaluated over the rows of the subquery. But an exception occurs if the aggregate's arguments (and filter_clause
if any) contain only outer-level variables: the aggregate then belongs to the nearest such outer level, and is evaluated over the rows of that query. The aggregate expression as a whole is then an outer reference for the subquery it appears in, and acts as a constant over any one evaluation of that subquery. The restriction about appearing only in the result list or HAVING
clause applies with respect to the query level that the aggregate belongs to.
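A sketch of that exception (students and courses are hypothetical tables): because max(s.grade) references only the outer query's variable, it belongs to the outer query and acts as a constant inside the subquery:

SELECT (SELECT count(*) FROM courses WHERE credits > max(s.grade))
FROM students s;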
4.2.8. Window Function Calls
A window function call represents the application of an aggregate-like function over some portion of the rows selected by a query. Unlike regular aggregate function calls, this is not tied to grouping of the selected rows into a single output row — each row remains separate in the query output. However the window function is able to scan all the rows that would be part of the current row's group according to the grouping specification (PARTITION BY
list) of the window function call. The syntax of a window function call is one of the following:
function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )
where window_definition
has the syntax
[ existing_window_name ]
[ PARTITION BY expression [, ...] ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
[ frame_clause ]
and the optional frame_clause
can be one of
{ RANGE | ROWS } frame_start
{ RANGE | ROWS } BETWEEN frame_start AND frame_end
where frame_start
and frame_end
can be one of
UNBOUNDED PRECEDING
value PRECEDING
CURRENT ROW
value FOLLOWING
UNBOUNDED FOLLOWING
Here, expression
represents any value expression that does not itself contain window function calls.
window_name
is a reference to a named window specification defined in the query's WINDOW
clause. Alternatively, a full window_definition
can be given within parentheses, using the same syntax as for defining a named window in the WINDOW
clause; see the SELECT reference page for details. It's worth pointing out that OVER wname
is not exactly equivalent to OVER (wname)
; the latter implies copying and modifying the window definition, and will be rejected if the referenced window specification includes a frame clause.
The PARTITION BY
option groups the rows of the query into partitions, which are processed separately by the window function. PARTITION BY
works similarly to a query-level GROUP BY
clause, except that its expressions are always just expressions and cannot be output-column names or numbers. Without PARTITION BY
, all rows produced by the query are treated as a single partition. The ORDER BY
option determines the order in which the rows of a partition are processed by the window function. It works similarly to a query-level ORDER BY
clause, but likewise cannot use output-column names or numbers. Without ORDER BY
, rows are processed in an unspecified order.
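A minimal sketch (the empsalary table is hypothetical, echoing the tutorial in Section 3.5): rank the rows of each department by salary, without collapsing them into one output row per department:

SELECT depname, empno, salary,
       rank() OVER (PARTITION BY depname ORDER BY salary DESC)
FROM empsalary;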
The frame_clause
specifies the set of rows constituting the window frame, which is a subset of the current partition, for those window functions that act on the frame instead of the whole partition. The frame can be specified in either RANGE
or ROWS
mode; in either case, it runs from the frame_start
to the frame_end
. If frame_end
is omitted, it defaults to CURRENT ROW
.
A frame_start
of UNBOUNDED PRECEDING
means that the frame starts with the first row of the partition, and similarly a frame_end
of UNBOUNDED FOLLOWING
means that the frame ends with the last row of the partition.
In RANGE
mode, a frame_start
of CURRENT ROW
means the frame starts with the current row's first peer row (a row that ORDER BY
considers equivalent to the current row), while a frame_end
of CURRENT ROW
means the frame ends with the last equivalent ORDER BY
peer. In ROWS
mode, CURRENT ROW
simply means the current row.
The value
PRECEDING
and value
FOLLOWING
cases are currently only allowed in ROWS
mode. They indicate that the frame starts or ends the specified number of rows before or after the current row. value
must be an integer expression not containing any variables, aggregate functions, or window functions. The value must not be null or negative; but it can be zero, which just selects the current row.
The default framing option is RANGE UNBOUNDED PRECEDING
, which is the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
. With ORDER BY
, this sets the frame to be all rows from the partition start up through the current row's last ORDER BY
peer. Without ORDER BY
, all rows of the partition are included in the window frame, since all rows become peers of the current row.
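For instance (same hypothetical empsalary table), an explicit ROWS frame limits a sum to the current row and its immediate neighbors, while the default frame sums everything up to the current row's last ORDER BY peer:

SELECT salary,
       sum(salary) OVER (ORDER BY salary
                         ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS nearby_sum,
       sum(salary) OVER (ORDER BY salary) AS running_sum   -- default RANGE UNBOUNDED PRECEDING frame
FROM empsalary;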
Restrictions are that frame_start
cannot be UNBOUNDED FOLLOWING
, frame_end
cannot be UNBOUNDED PRECEDING
, and the frame_end
choice cannot appear earlier in the above list than the frame_start
choice — for example RANGE BETWEEN CURRENT ROW AND value PRECEDING is not allowed.
If FILTER
is specified, then only the input rows for which the filter_clause
evaluates to true are fed to the window function; other rows are discarded. Only window functions that are aggregates accept a FILTER
clause.
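A short sketch (empsalary again hypothetical): count, within each department, only the rows passing the filter, while every input row still appears in the output:

SELECT depname, salary,
       count(*) FILTER (WHERE salary > 5000) OVER (PARTITION BY depname) AS high_earners
FROM empsalary;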
The built-in window functions are described in Table 9.56. Other window functions can be added by the user. Also, any built-in or user-defined normal aggregate function can be used as a window function. Ordered-set aggregates presently cannot be used as window functions, however.
The syntaxes using *
are used for calling parameter-less aggregate functions as window functions, for example count(*) OVER (PARTITION BY x ORDER BY y)
. The asterisk (*
) is customarily not used for non-aggregate window functions. Aggregate window functions, unlike normal aggregate functions, do not allow DISTINCT
or ORDER BY
to be used within the function argument list.
Window function calls are permitted only in the SELECT
list and the ORDER BY
clause of the query.
More information about window functions can be found in Section 3.5, Section 9.21, and Section 7.2.5.
4.2.9. Type Casts
A type cast specifies a conversion from one data type to another. PostgreSQL accepts two equivalent syntaxes for type casts:
CAST ( expression AS type )
expression::type
The CAST
syntax conforms to SQL; the syntax with ::
is historical PostgreSQL usage.
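For example, these two expressions are equivalent:

SELECT CAST('100' AS integer);
SELECT '100'::integer;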
When a cast is applied to a value expression of a known type, it represents a run-time type conversion. The cast will succeed only if a suitable type conversion operation has been defined. Notice that this is subtly different from the use of casts with constants, as shown in Section 4.1.2.7. A cast applied to an unadorned string literal represents the initial assignment of a type to a literal constant value, and so it will succeed for any type (if the contents of the string literal are acceptable input syntax for the data type).
An explicit type cast can usually be omitted if there is no ambiguity as to the type that a value expression must produce (for example, when it is assigned to a table column); the system will automatically apply a type cast in such cases. However, automatic casting is only done for casts that are marked “OK to apply implicitly” in the system catalogs. Other casts must be invoked with explicit casting syntax. This restriction is intended to prevent surprising conversions from being applied silently.
It is also possible to specify a type cast using a function-like syntax:
typename ( expression )
However, this only works for types whose names are also valid as function names. For example, double precision
cannot be used this way, but the equivalent float8
can. Also, the names interval
, time
, and timestamp
can only be used in this fashion if they are double-quoted, because of syntactic conflicts. Therefore, the use of the function-like cast syntax leads to inconsistencies and should probably be avoided.
Note
The function-like syntax is in fact just a function call. When one of the two standard cast syntaxes is used to do a run-time conversion, it will internally invoke a registered function to perform the conversion. By convention, these conversion functions have the same name as their output type, and thus the “function-like syntax” is nothing more than a direct invocation of the underlying conversion function. Obviously, this is not something that a portable application should rely on. For further details see CREATE CAST.
4.2.10. Collation Expressions
The COLLATE
clause overrides the collation of an expression. It is appended to the expression it applies to:
expr COLLATE collation
where collation
is a possibly schema-qualified identifier. The COLLATE
clause binds tighter than operators; parentheses can be used when necessary.
If no collation is explicitly specified, the database system either derives a collation from the columns involved in the expression, or it defaults to the default collation of the database if no column is involved in the expression.
The two common uses of the COLLATE
clause are overriding the sort order in an ORDER BY
clause, for example:
SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C";
and overriding the collation of a function or operator call that has locale-sensitive results, for example:
SELECT * FROM tbl WHERE a > 'foo' COLLATE "C";
Note that in the latter case the COLLATE
clause is attached to an input argument of the operator we wish to affect. It doesn't matter which argument of the operator or function call the COLLATE
clause is attached to, because the collation that is applied by the operator or function is derived by considering all arguments, and an explicit COLLATE
clause will override the collations of all other arguments. (Attaching non-matching COLLATE
clauses to more than one argument, however, is an error. For more details see Section 23.2.) Thus, this gives the same result as the previous example:
SELECT * FROM tbl WHERE a COLLATE "C" > 'foo';
But this is an error:
SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C";
because it attempts to apply a collation to the result of the >
operator, which is of the non-collatable data type boolean
.
4.2.11. Scalar Subqueries
A scalar subquery is an ordinary SELECT
query in parentheses that returns exactly one row with one column. (See Chapter 7 for information about writing queries.) The SELECT
query is executed and the single returned value is used in the surrounding value expression. It is an error to use a query that returns more than one row or more than one column as a scalar subquery. (But if, during a particular execution, the subquery returns no rows, there is no error; the scalar result is taken to be null.) The subquery can refer to variables from the surrounding query, which will act as constants during any one evaluation of the subquery. See also Section 9.22 for other expressions involving subqueries.
For example, the following finds the largest city population in each state:
SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name) FROM states;
4.2.12. Array Constructors
An array constructor is an expression that builds an array value using values for its member elements. A simple array constructor consists of the key word ARRAY
, a left square bracket [
, a list of expressions (separated by commas) for the array element values, and finally a right square bracket ]
. For example:
SELECT ARRAY[1,2,3+4];
  array
---------
 {1,2,7}
(1 row)
By default, the array element type is the common type of the member expressions, determined using the same rules as for UNION
or CASE
constructs (see Section 10.5). You can override this by explicitly casting the array constructor to the desired type, for example:
SELECT ARRAY[1,2,22.7]::integer[];
  array
----------
 {1,2,23}
(1 row)
This has the same effect as casting each expression to the array element type individually. For more on casting, see Section 4.2.9.
Multidimensional array values can be built by nesting array constructors. In the inner constructors, the key word ARRAY
can be omitted. For example, these produce the same result:
SELECT ARRAY[ARRAY[1,2], ARRAY[3,4]];
     array
---------------
 {{1,2},{3,4}}
(1 row)

SELECT ARRAY[[1,2],[3,4]];
     array
---------------
 {{1,2},{3,4}}
(1 row)
Since multidimensional arrays must be rectangular, inner constructors at the same level must produce sub-arrays of identical dimensions. Any cast applied to the outer ARRAY
constructor propagates automatically to all the inner constructors.
Multidimensional array constructor elements can be anything yielding an array of the proper kind, not only a sub-ARRAY
construct. For example:
CREATE TABLE arr(f1 int[], f2 int[]);
INSERT INTO arr VALUES (ARRAY[[1,2],[3,4]], ARRAY[[5,6],[7,8]]);

SELECT ARRAY[f1, f2, '{{9,10},{11,12}}'::int[]] FROM arr;
                     array
------------------------------------------------
 {{{1,2},{3,4}},{{5,6},{7,8}},{{9,10},{11,12}}}
(1 row)
You can construct an empty array, but since it's impossible to have an array with no type, you must explicitly cast your empty array to the desired type. For example:
SELECT ARRAY[]::integer[];
 array
-------
 {}
(1 row)
It is also possible to construct an array from the results of a subquery. In this form, the array constructor is written with the key word ARRAY
followed by a parenthesized (not bracketed) subquery. For example:
SELECT ARRAY(SELECT oid FROM pg_proc WHERE proname LIKE 'bytea%');
                                 array
-----------------------------------------------------------------------
 {2011,1954,1948,1952,1951,1244,1950,2005,1949,1953,2006,31,2412,2413}
(1 row)

SELECT ARRAY(SELECT ARRAY[i, i*2] FROM generate_series(1,5) AS a(i));
              array
----------------------------------
 {{1,2},{2,4},{3,6},{4,8},{5,10}}
(1 row)
The subquery must return a single column. If the subquery's output column is of a non-array type, the resulting one-dimensional array will have an element for each row in the subquery result, with an element type matching that of the subquery's output column. If the subquery's output column is of an array type, the result will be an array of the same type but one higher dimension; in this case all the subquery rows must yield arrays of identical dimensionality, else the result would not be rectangular.
The subscripts of an array value built with ARRAY
always begin with one. For more information about arrays, see Section 8.15.
4.2.13. Row Constructors
A row constructor is an expression that builds a row value (also called a composite value) using values for its member fields. A row constructor consists of the key word ROW
, a left parenthesis, zero or more expressions (separated by commas) for the row field values, and finally a right parenthesis. For example:
SELECT ROW(1,2.5,'this is a test');
The key word ROW
is optional when there is more than one expression in the list.
A row constructor can include the syntax rowvalue.*
, which will be expanded to a list of the elements of the row value, just as occurs when the .*
syntax is used at the top level of a SELECT
list (see Section 8.16.5). For example, if table t
has columns f1
and f2
, these are the same:
SELECT ROW(t.*, 42) FROM t;
SELECT ROW(t.f1, t.f2, 42) FROM t;
Note
Before PostgreSQL 8.2, the .*
syntax was not expanded in row constructors, so that writing ROW(t.*, 42)
created a two-field row whose first field was another row value. The new behavior is usually more useful. If you need the old behavior of nested row values, write the inner row value without .*
, for instance ROW(t, 42)
.
By default, the value created by a ROW
expression is of an anonymous record type. If necessary, it can be cast to a named composite type — either the row type of a table, or a composite type created with CREATE TYPE AS
. An explicit cast might be needed to avoid ambiguity. For example:
CREATE TABLE mytable(f1 int, f2 float, f3 text);

CREATE FUNCTION getf1(mytable) RETURNS int AS 'SELECT $1.f1' LANGUAGE SQL;

-- No cast needed since only one getf1() exists
SELECT getf1(ROW(1,2.5,'this is a test'));
 getf1
-------
     1
(1 row)

CREATE TYPE myrowtype AS (f1 int, f2 text, f3 numeric);

CREATE FUNCTION getf1(myrowtype) RETURNS int AS 'SELECT $1.f1' LANGUAGE SQL;

-- Now we need a cast to indicate which function to call:
SELECT getf1(ROW(1,2.5,'this is a test'));
ERROR:  function getf1(record) is not unique

SELECT getf1(ROW(1,2.5,'this is a test')::mytable);
 getf1
-------
     1
(1 row)

SELECT getf1(CAST(ROW(11,'this is a test',2.5) AS myrowtype));
 getf1
-------
    11
(1 row)
Row constructors can be used to build composite values to be stored in a composite-type table column, or to be passed to a function that accepts a composite parameter. Also, it is possible to compare two row values or test a row with IS NULL
or IS NOT NULL
, for example:
SELECT ROW(1,2.5,'this is a test') = ROW(1, 3, 'not the same');

SELECT ROW(table.*) IS NULL FROM table;  -- detect all-null rows
For more detail see Section 9.23. Row constructors can also be used in connection with subqueries, as discussed in Section 9.22.
4.2.14. Expression Evaluation Rules
The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order.
Furthermore, if the result of an expression can be determined by evaluating only some parts of it, then other subexpressions might not be evaluated at all. For instance, if one wrote:
SELECT true OR somefunc();
then somefunc()
would (probably) not be called at all. The same would be the case if one wrote:
SELECT somefunc() OR true;
Note that this is not the same as the left-to-right “short-circuiting” of Boolean operators that is found in some programming languages.
As a consequence, it is unwise to use functions with side effects as part of complex expressions. It is particularly dangerous to rely on side effects or evaluation order in WHERE
and HAVING
clauses, since those clauses are extensively reprocessed as part of developing an execution plan. Boolean expressions (AND
/OR
/NOT
combinations) in those clauses can be reorganized in any manner allowed by the laws of Boolean algebra.
When it is essential to force evaluation order, a CASE
construct (see Section 9.17) can be used. For example, this is an untrustworthy way of trying to avoid division by zero in a WHERE
clause:
SELECT ... WHERE x > 0 AND y/x > 1.5;
But this is safe:
SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END;
A CASE
construct used in this fashion will defeat optimization attempts, so it should only be done when necessary. (In this particular example, it would be better to sidestep the problem by writing y > 1.5*x
instead.)
CASE
is not a cure-all for such issues, however. One limitation of the technique illustrated above is that it does not prevent early evaluation of constant subexpressions. As described in Section 36.6, functions and operators marked IMMUTABLE
can be evaluated when the query is planned rather than when it is executed. Thus for example
SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab;
is likely to result in a division-by-zero failure due to the planner trying to simplify the constant subexpression, even if every row in the table has x > 0
so that the ELSE
arm would never be entered at run time.
While that particular example might seem silly, related cases that don't obviously involve constants can occur in queries executed within functions, since the values of function arguments and local variables can be inserted into queries as constants for planning purposes. Within PL/pgSQL functions, for example, using an IF
-THEN
-ELSE
statement to protect a risky computation is much safer than just nesting it in a CASE
expression.
Another limitation of the same kind is that a CASE
cannot prevent evaluation of an aggregate expression contained within it, because aggregate expressions are computed before other expressions in a SELECT
list or HAVING
clause are considered. For example, the following query can cause a division-by-zero error despite seemingly having protected against it:
SELECT CASE WHEN min(employees) > 0 THEN avg(expenses / employees) END FROM departments;
The min()
and avg()
aggregates are computed concurrently over all the input rows, so if any row has employees
equal to zero, the division-by-zero error will occur before there is any opportunity to test the result of min()
. Instead, use a WHERE
or FILTER
clause to prevent problematic input rows from reaching an aggregate function in the first place.
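As a hedged rewrite of the example above, a FILTER clause keeps rows with employees equal to zero away from the division entirely (note that this simply excludes such rows from the average rather than suppressing the result):

SELECT avg(expenses / employees) FILTER (WHERE employees > 0)
FROM departments;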