Suppose the address and phone2 columns in table #1 are empty, while its gender and birthdate values match those in table #2. How can you read data from table #2 and update address and phone2 in table #1 with the values from table #2's address and phone columns wherever gender and birthdate are the same in each row? Both Delta Lake and MariaDB provide statements for this kind of update-from-another-source scenario; the sections below describe them.
Delta Lake supports several statements to facilitate deleting data from and updating data in Delta tables.
Delete from a table
You can remove data that matches a predicate from a Delta table. For instance, to delete all events from before 2017, you can run the following:
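A minimal SQL sketch, assuming a Delta table named events with a date column (both names are assumptions for illustration):

```sql
-- Delete all events that occurred before 2017
DELETE FROM events WHERE date < '2017-01-01';
```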
Note
The Python API is available in Databricks Runtime 6.1 and above.
Note
The Scala API is available in Databricks Runtime 6.0 and above.
Note
The Java API is available in Databricks Runtime 6.0 and above.
See the API reference for details.
Important
delete removes the data from the latest version of the Delta table but does not remove it from the physical storage until the old versions are explicitly vacuumed. See vacuum for details.
Tip
When possible, provide predicates on the partition columns for a partitioned Delta table as such predicates can significantly speed up the operation.
Update a table
You can update data that matches a predicate in a Delta table. For example, to fix a spelling mistake in the eventType, you can run the following:
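A hedged SQL sketch of such a correction, assuming an events table whose eventType column contains the misspelling 'clck':

```sql
-- Fix the misspelled event type
UPDATE events SET eventType = 'click' WHERE eventType = 'clck';
```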
Note
The Python API is available in Databricks Runtime 6.1 and above.
Note
The Scala API is available in Databricks Runtime 6.0 and above.
See the API reference for details.
Tip
Similar to delete, update operations can get a significant speedup with predicates on partitions.
Upsert into a table using merge
You can upsert data from a source table, view, or DataFrame into a target Delta table using the merge operation. This operation is similar to the SQL MERGE INTO command but has additional support for deletes and extra conditions in updates, inserts, and deletes.
Suppose you have a Spark DataFrame that contains new data for events with eventId. Some of these events may already be present in the events table. To merge the new data into the events table, you want to update the matching rows (that is, eventId already present) and insert the new rows (that is, eventId not present). You can run the following:
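A hedged SQL sketch of this upsert, assuming the new data has been registered as a temporary view named updates (for a DataFrame, createOrReplaceTempView can be used first) and that matching on eventId alone is sufficient:

```sql
-- Update rows whose eventId already exists, insert the rest
MERGE INTO events
USING updates
ON events.eventId = updates.eventId
WHEN MATCHED THEN
  UPDATE SET *
WHEN NOT MATCHED THEN
  INSERT *;
```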
For syntax details, see
- Databricks Runtime 7.x: MERGE INTO (Delta Lake on Databricks)
- Databricks Runtime 5.5 LTS and 6.x: Merge Into (Delta Lake on Databricks)
See the API reference for Scala, Java, and Python syntax details.
Operation semantics
Here is a detailed description of the merge programmatic operation.
- There can be any number of whenMatched and whenNotMatched clauses.

Note

In Databricks Runtime 7.2 and below, merge can have at most 2 whenMatched clauses and at most 1 whenNotMatched clause.

- whenMatched clauses are executed when a source row matches a target table row based on the match condition. These clauses have the following semantics.
  - whenMatched clauses can have at most one update and one delete action. The update action in merge only updates the specified columns (similar to the update operation) of the matched target row. The delete action deletes the matched row.
  - Each whenMatched clause can have an optional condition. If this clause condition exists, the update or delete action is executed for any matching source-target row pair only when the clause condition is true.
  - If there are multiple whenMatched clauses, then they are evaluated in the order they are specified (that is, the order of the clauses matters). All whenMatched clauses, except the last one, must have conditions.
  - If both whenMatched clauses have conditions and neither of the conditions is true for a matching source-target row pair, then the matched target row is left unchanged.
  - To update all the columns of the target Delta table with the corresponding columns of the source dataset, use whenMatched(...).updateAll(). This is equivalent to specifying an update expression of the form column = source.column for all the columns of the target Delta table. Therefore, this action assumes that the source table has the same columns as those in the target table, otherwise the query throws an analysis error.
Note
This behavior changes when automatic schema migration is enabled. See Automatic schema evolution for details.
- whenNotMatched clauses are executed when a source row does not match any target row based on the match condition. These clauses have the following semantics.
  - whenNotMatched clauses can have only the insert action. The new row is generated based on the specified columns and corresponding expressions. You do not need to specify all the columns in the target table. For unspecified target columns, NULL is inserted.

Note

In Databricks Runtime 6.5 and below, you must provide all the columns in the target table for the INSERT action.

  - Each whenNotMatched clause can have an optional condition. If the clause condition is present, a source row is inserted only if that condition is true for that row. Otherwise, the source row is ignored.
  - If there are multiple whenNotMatched clauses, then they are evaluated in the order they are specified (that is, the order of the clauses matters). All whenNotMatched clauses, except the last one, must have conditions.
  - To insert all the columns of the target Delta table with the corresponding columns of the source dataset, use whenNotMatched(...).insertAll(). This is equivalent to specifying an insert expression of the form column = source.column for all the columns of the target Delta table. Therefore, this action assumes that the source table has the same columns as those in the target table, otherwise the query throws an analysis error.
Note
This behavior changes when automatic schema migration is enabled. See Automatic schema evolution for details.
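As a hedged SQL illustration of these clause semantics (the events and updates names, and an isDeleted flag on the source, are assumptions for this sketch): every matched clause except the last carries a condition, and the clauses are evaluated in the order written.

```sql
MERGE INTO events
USING updates
ON events.eventId = updates.eventId
-- conditional matched clause: delete rows the source marks as deleted
WHEN MATCHED AND updates.isDeleted = true THEN DELETE
-- the last matched clause may be unconditional: update the remaining matches
WHEN MATCHED THEN UPDATE SET events.data = updates.data
-- unmatched source rows are inserted
WHEN NOT MATCHED THEN INSERT (eventId, data) VALUES (updates.eventId, updates.data);
```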
Important
A merge operation can fail if multiple rows of the source dataset match and attempt to update the same rows of the target Delta table. According to the SQL semantics of merge, such an update operation is ambiguous as it is unclear which source row should be used to update the matched target row. You can preprocess the source table to eliminate the possibility of multiple matches. See the change data capture example, which preprocesses the change dataset (that is, the source dataset) to retain only the latest change for each key before applying that change to the target Delta table.
Note
In Databricks Runtime 7.3 LTS and above, multiple matches are allowed when matches are unconditionally deleted (since unconditional delete is not ambiguous even if there are multiple matches).
Schema validation
merge automatically validates that the schema of the data generated by insert and update expressions is compatible with the schema of the table. It uses the following rules to determine whether the merge operation is compatible:
- For update and insert actions, the specified target columns must exist in the target Delta table.
- For updateAll and insertAll actions, the source dataset must have all the columns of the target Delta table. The source dataset can have extra columns and they are ignored.
- For all actions, if the data types generated by the expressions producing the target columns differ from the corresponding columns in the target Delta table, merge tries to cast them to the types in the table.
Automatic schema evolution
Note
Schema evolution in merge is available in Databricks Runtime 6.6 and above.
By default, updateAll and insertAll assign all the columns in the target Delta table with columns of the same name from the source dataset. Any columns in the source dataset that don’t match columns in the target table are ignored. However, in some use cases, it is desirable to automatically add source columns to the target Delta table. To automatically update the table schema during a merge operation with updateAll and insertAll (at least one of them), you can set the Spark session configuration spark.databricks.delta.schema.autoMerge.enabled to true before running the merge operation.
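For example, in a SQL session the flag can be set before the merge runs (a sketch; the configuration key is the one named above):

```sql
SET spark.databricks.delta.schema.autoMerge.enabled = true;
```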
Note
- Schema evolution occurs only when there is either an updateAll or an insertAll action, or both.
- update and insert actions cannot explicitly refer to target columns that do not already exist in the target table (even if there are updateAll or insertAll as one of the clauses). See the examples below.
Note
In Databricks Runtime 7.4 and below, merge supports schema evolution of only top-level columns, and not of nested columns.
Here are a few examples of the effects of a merge operation with and without schema evolution.
| Columns | Query (in Scala) | Behavior without schema evolution (default) | Behavior with schema evolution |
|---|---|---|---|
| Target columns: key, value. Source columns: key, value, newValue | whenMatched(...).updateAll() and whenNotMatched(...).insertAll() | The table schema remains unchanged; only columns key, value are updated/inserted. | The table schema is changed to (key, value, newValue). updateAll updates columns value and newValue, and insertAll inserts rows (key, value, newValue). |
| Target columns: key, oldValue. Source columns: key, newValue | whenMatched(...).updateAll() and whenNotMatched(...).insertAll() | updateAll and insertAll actions throw an error because the target column oldValue is not in the source. | The table schema is changed to (key, oldValue, newValue). updateAll updates columns key and newValue leaving oldValue unchanged, and insertAll inserts rows (key, NULL, newValue) (that is, oldValue is inserted as NULL). |
| Target columns: key, oldValue. Source columns: key, newValue | whenMatched(...).update(...) referring to newValue | update throws an error because column newValue does not exist in the target table. | update still throws an error because column newValue does not exist in the target table. |
| Target columns: key, oldValue. Source columns: key, newValue | whenNotMatched(...).insert(...) referring to newValue | insert throws an error because column newValue does not exist in the target table. | insert still throws an error because column newValue does not exist in the target table. |
Performance tuning
You can reduce the time taken by merge using the following approaches:
- Reduce the search space for matches: By default, the merge operation searches the entire Delta table to find matches in the source table. One way to speed up merge is to reduce the search space by adding known constraints in the match condition. For example, suppose you have a table that is partitioned by country and date and you want to use merge to update information for the last day and a specific country. Adding a condition on those partition columns to the match condition (see the sketch after this list) makes the query faster, as it looks for matches only in the relevant partitions. Furthermore, it also reduces the chances of conflicts with other concurrent operations. See Concurrency control for more details.
- Compact files: If the data is stored in many small files, reading the data to search for matches can become slow. You can compact small files into larger files to improve read throughput. See Compact files for details.
- Control the shuffle partitions for writes: The merge operation shuffles data multiple times to compute and write the updated data. The number of tasks used to shuffle is controlled by the Spark session configuration spark.sql.shuffle.partitions. Setting this parameter not only controls the parallelism but also determines the number of output files. Increasing the value increases parallelism but also generates a larger number of smaller data files.
- Enable optimized writes: For partitioned tables, merge can produce a much larger number of small files than the number of shuffle partitions. This is because every shuffle task can write multiple files in multiple partitions, which can become a performance bottleneck. You can optimize this by enabling Optimized Writes.
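As referenced in the first item above, a hedged sketch of a match condition constrained to the relevant partitions; the events and updates names and the country and date partition columns are assumptions for illustration:

```sql
-- Only the partitions for today's date and the given country are searched for matches
MERGE INTO events
USING updates
ON events.eventId = updates.eventId
   AND events.date = current_date()
   AND events.country = 'USA'
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```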
Merge examples
Here are a few examples of how to use merge in different scenarios.
A common ETL use case is to collect logs into a Delta table by appending them to the table. However, the sources can often generate duplicate log records, and downstream deduplication steps are needed to take care of them. With merge, you can avoid inserting the duplicate records.
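A hedged SQL sketch of an insert-only merge for this case, assuming the target table is logs, the new batch is available as newDedupedLogs, and uniqueId identifies a log record:

```sql
-- Insert a record only if no row with the same uniqueId is already in the table
MERGE INTO logs
USING newDedupedLogs
ON logs.uniqueId = newDedupedLogs.uniqueId
WHEN NOT MATCHED THEN INSERT *;
```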
Note
The dataset containing the new logs needs to be deduplicated within itself. By the SQL semantics of merge, it matches and deduplicates the new data with the existing data in the table, but if there is duplicate data within the new dataset, it is inserted. Hence, deduplicate the new data before merging into the table.
If you know that you may get duplicate records only for a few days, you can optimize your query further by partitioning the table by date, and then specifying the date range of the target table to match on.
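Building on the previous sketch, a hedged example that restricts matching to a recent window (a date column and a 7-day window are assumed, matching the sentence that follows):

```sql
-- Look for duplicates only in the last 7 days of the target table
MERGE INTO logs
USING newDedupedLogs
ON logs.uniqueId = newDedupedLogs.uniqueId
   AND logs.date > current_date() - INTERVAL 7 DAYS
WHEN NOT MATCHED AND newDedupedLogs.date > current_date() - INTERVAL 7 DAYS THEN
  INSERT *;
```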
This is more efficient than the previous command as it looks for duplicates only in the last 7 days of logs, not the entire table. Furthermore, you can use this insert-only merge with Structured Streaming to perform continuous deduplication of the logs.
- In a streaming query, you can use the merge operation in foreachBatch to continuously write any streaming data to a Delta table with deduplication. See the following streaming example for more information on foreachBatch.
- In another streaming query, you can continuously read deduplicated data from this Delta table. This is possible because an insert-only merge only appends new data to the Delta table.
Note
Insert-only merge is optimized to only append data in Databricks Runtime 6.2 and above. In Databricks Runtime 6.1 and below, writes from insert-only merge operations cannot be read as a stream.
Another common operation is SCD Type 2, which maintains a history of all changes made to each key in a dimensional table. Such operations require updating existing rows to mark the previous values of keys as old, and then inserting the new rows as the latest values. Given a source table with updates and the target table with the dimensional data, SCD Type 2 can be expressed with merge.
Here is a concrete example of maintaining the history of addresses for a customer along with the active date range of each address. When a customer’s address needs to be updated, you have to mark the previous address as not the current one, update its active date range, and add the new address as the current one.
SCD Type 2 using merge notebook
Similar to SCD, another common use case, often called change data capture (CDC), is to apply all data changes generated from an external database into a Delta table. In other words, a set of updates, deletes, and inserts applied to an external table needs to be applied to a Delta table. You can do this using merge as follows.
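A hedged SQL sketch of applying such a change set, assuming the target Delta table has columns (key, value), and the change feed changes has (key, value, deleted), where deleted marks rows removed in the source database:

```sql
MERGE INTO target
USING changes
ON target.key = changes.key
-- rows deleted upstream are deleted here
WHEN MATCHED AND changes.deleted = true THEN DELETE
-- remaining matches are updates
WHEN MATCHED THEN UPDATE SET target.value = changes.value
-- new keys that are not deletions are inserted
WHEN NOT MATCHED AND changes.deleted = false THEN
  INSERT (key, value) VALUES (changes.key, changes.value);
```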
Write change data using MERGE notebook
You can use a combination of merge and foreachBatch (see foreachBatch for more information) to write complex upserts from a streaming query into a Delta table. For example:
- Write streaming aggregates in Update Mode: This is much more efficient than Complete Mode.
- Write a stream of database changes into a Delta table: The merge query for writing change data can be used in foreachBatch to continuously apply a stream of changes to a Delta table.
- Write a stream of data into a Delta table with deduplication: The insert-only merge query for deduplication can be used in foreachBatch to continuously write data (with duplicates) to a Delta table with automatic deduplication.
Note
- Make sure that your merge statement inside foreachBatch is idempotent, as restarts of the streaming query can apply the operation on the same batch of data multiple times.
- When merge is used in foreachBatch, the input data rate of the streaming query (reported through StreamingQueryProgress and visible in the notebook rate graph) may be reported as a multiple of the actual rate at which data is generated at the source. This is because merge reads the input data multiple times, causing the input metrics to be multiplied. If this is a bottleneck, you can cache the batch DataFrame before merge and then uncache it after merge.
Write streaming aggregates in update mode using merge and foreachBatch notebook
Syntax
Single-table syntax:
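The clauses described below fit together roughly as follows (an approximate sketch of the grammar, not the authoritative definition):

```sql
UPDATE [LOW_PRIORITY] [IGNORE] table_reference
  [PARTITION (partition_list)]
  SET col1 = {expr1 | DEFAULT} [, col2 = {expr2 | DEFAULT}] ...
  [WHERE where_condition]
  [ORDER BY ...]
  [LIMIT row_count]
```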
Multiple-table syntax:
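Approximately:

```sql
UPDATE [LOW_PRIORITY] [IGNORE] table_references
  SET col1 = {expr1 | DEFAULT} [, col2 = {expr2 | DEFAULT}] ...
  [WHERE where_condition]
```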
Description
For the single-table syntax, the UPDATE statement updates columns of existing rows in the named table with new values. The SET clause indicates which columns to modify and the values they should be given. Each value can be given as an expression, or the keyword DEFAULT to set a column explicitly to its default value. The WHERE clause, if given, specifies the conditions that identify which rows to update. With no WHERE clause, all rows are updated. If the ORDER BY clause is specified, the rows are updated in the order that is specified. The LIMIT clause places a limit on the number of rows that can be updated.
MariaDB starting with 10.0
The PARTITION clause was introduced in MariaDB 10.0. See Partition Pruning and Selection for details.
For the multiple-table syntax, UPDATE updates rows in each table named in table_references that satisfy the conditions. Until MariaDB 10.3.2, ORDER BY and LIMIT could not be used in this case; this restriction was lifted in MariaDB 10.3.2 and both clauses can be used with multiple-table updates. An UPDATE can also reference tables which are located in different databases; see Identifier Qualifiers for the syntax.
where_condition is an expression that evaluates to true for each row to be updated. table_references and where_condition are specified as described in SELECT.
For single-table updates, assignments are evaluated in left-to-right order, while for multi-table updates, there is no guarantee of a particular order. If the SIMULTANEOUS_ASSIGNMENT sql_mode (available from MariaDB 10.3.5) is set, UPDATE statements evaluate all assignments simultaneously.
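A hypothetical illustration (a table t with integer columns a and b is an assumption):

```sql
-- With SIMULTANEOUS_ASSIGNMENT set, both assignments read the pre-update values,
-- so a and b are swapped; with default left-to-right evaluation, both columns
-- would end up holding the original value of b.
SET SESSION sql_mode = CONCAT(@@sql_mode, ',SIMULTANEOUS_ASSIGNMENT');
UPDATE t SET a = b, b = a;
```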
You need the UPDATE privilege only for columns referenced in an UPDATE that are actually updated. You need only the SELECT privilege for any columns that are read but not modified. See GRANT.
The UPDATE statement supports the following modifiers:
- If you use the LOW_PRIORITY keyword, execution of the UPDATE is delayed until no other clients are reading from the table. This affects only storage engines that use only table-level locking (MyISAM, MEMORY, MERGE). See HIGH_PRIORITY and LOW_PRIORITY clauses for details.
- If you use the IGNORE keyword, the update statement does not abort even if errors occur during the update. Rows for which duplicate-key conflicts occur are not updated. Rows for which columns are updated to values that would cause data conversion errors are updated to the closest valid values instead.
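A brief hypothetical sketch of where such a modifier goes (the customers table and a unique index on email are assumptions):

```sql
-- Rows whose lower-cased email would collide with an existing one are skipped
-- instead of aborting the whole statement
UPDATE IGNORE customers SET email = LOWER(email);
```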
UPDATE Statements With the Same Source and Target
MariaDB starting with 10.3.2
From MariaDB 10.3.2, UPDATE statements may have the same source and target.
For example, given the following table:
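A sketch of such a table (names and sample values are illustrative):

```sql
CREATE TABLE t1 (c1 INT, c2 INT);
INSERT INTO t1 VALUES (1, 10), (2, 20);
```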
Until MariaDB 10.3.1, the following UPDATE statement would not work:
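The statement below reads the same table it updates, which older versions reject with an error saying the target table is also used as a source:

```sql
UPDATE t1 SET c1 = c1 + 1 WHERE c2 = (SELECT MAX(c2) FROM t1);
```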
From MariaDB 10.3.2, the statement executes successfully:
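On 10.3.2 and later the same statement simply runs:

```sql
-- With the sample data above, the row (2, 20) becomes (3, 20)
UPDATE t1 SET c1 = c1 + 1 WHERE c2 = (SELECT MAX(c2) FROM t1);
```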
Example
Single-table syntax:
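An illustrative single-table example (the products table and its columns are hypothetical):

```sql
-- Reduce the price of the ten cheapest in-stock products by 10%
UPDATE products
  SET price = price * 0.90
  WHERE in_stock = 1
  ORDER BY price
  LIMIT 10;
```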
Multiple-table syntax:
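A multiple-table sketch in the spirit of the question at the top of the page: copy address and phone values from table2 into table1 where gender and birthdate match (all identifiers are hypothetical):

```sql
-- Fill table1.address and table1.phone2 from table2 for rows whose
-- gender and birthdate values match
UPDATE table1
  JOIN table2
    ON table1.gender = table2.gender
   AND table1.birthdate = table2.birthdate
  SET table1.address = table2.address,
      table1.phone2 = table2.phone;
```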