Commit graph

1037 commits

Author SHA1 Message Date
Sergei Golubchik
ba01c2aaf0 Merge branch '11.4' into 11.7
* rpl.rpl_system_versioning_partitions updated for MDEV-32188
* innodb.row_size_error_log_warnings_3 changed error for MDEV-33658
  (checks are done in a different order)
2025-02-06 16:46:36 +01:00
Sergei Golubchik
7d657fda64 Merge branch '10.11' into 11.4 2025-01-30 12:01:11 +01:00
Sergei Golubchik
e69f8cae1a Merge branch '10.6' into 10.11 2025-01-30 11:55:13 +01:00
Sergei Golubchik
03d2328785 MDEV-35944 DELETE fails to notice transaction abort, violating ACID
Process errors of read_record().

Also, add an assert that Marko requested
2025-01-29 10:43:29 +01:00
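A minimal sketch of the kind of change described above, using simplified stand-in types rather than the actual READ_RECORD API (all names here are illustrative only):

  #include <cstdio>

  // Illustrative return codes; the real read_record() uses its own convention.
  enum read_result { READ_OK = 0, READ_EOF = -1, READ_ERROR = 1 };

  struct reader { int rows_left; bool fail; };

  static int read_record(reader *r)
  {
    if (r->fail)
      return READ_ERROR;              // e.g. the transaction was aborted
    return r->rows_left-- > 0 ? READ_OK : READ_EOF;
  }

  // Before: any non-zero result was treated as "no more rows".
  // After: a real error is propagated to the caller instead of being ignored.
  static int delete_all_rows(reader *r)
  {
    int rc;
    while ((rc = read_record(r)) == READ_OK)
    {
      /* delete the current row */
    }
    return rc == READ_EOF ? 0 : rc;   // report the error, don't swallow it
  }

  int main()
  {
    reader ok{3, false}, bad{3, true};
    std::printf("%d %d\n", delete_all_rows(&ok), delete_all_rows(&bad));
  }
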
Marko Mäkelä
98dbe3bfaf Merge 10.5 into 10.6 2025-01-20 09:57:37 +02:00
Sergei Golubchik
a69da0c31e MDEV-19761 Before Trigger not processed for Not Null Columns if no explicit value and no DEFAULT
It's incorrect to zero out table->triggers->extra_null_bitmap
before a statement, because if an INSERT uses an explicit field list
and omits a field that has no default value, the field should
get NULL implicitly. So extra_null_bitmap should have 1s for all
fields that have no defaults.

* create extra_null_bitmap_init and initialize it as above
* copy extra_null_bitmap_init to extra_null_bitmap for inserts
* still zero out extra_null_bitmap for updates/deletes where
  all fields definitely have a value
* make not_null_fields_have_null_values() send
  ER_NO_DEFAULT_FOR_FIELD for fields with no default and no value;
  otherwise creating a trigger with an empty body would change the
  error message
2025-01-17 23:42:56 +01:00
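A small standalone sketch of the bitmap handling described above (simplified types; the real code works on table->triggers->extra_null_bitmap):

  #include <cstdio>
  #include <vector>

  struct field_def { const char *name; bool has_default; };

  int main()
  {
    std::vector<field_def> fields = { {"a", true}, {"b", false}, {"c", false} };

    // extra_null_bitmap_init: 1 for every field that has no default,
    // built once when the table/triggers are set up.
    std::vector<bool> init(fields.size());
    for (size_t i = 0; i < fields.size(); i++)
      init[i] = !fields[i].has_default;

    // INSERT: start from the precomputed init bitmap, so omitted fields
    // without a default are seen as NULL by BEFORE triggers.
    std::vector<bool> extra_null_bitmap = init;

    // UPDATE/DELETE: every field definitely has a value, so zero it out.
    std::vector<bool> extra_null_bitmap_upd(fields.size(), false);

    for (size_t i = 0; i < fields.size(); i++)
      std::printf("%s: insert=%d update=%d\n", fields[i].name,
                  (int) extra_null_bitmap[i], (int) extra_null_bitmap_upd[i]);
  }
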
Oleksandr Byelkin
0d35fe6e57 MDEV-35326: Memory Leak in init_io_cache_ext upon SHUTDOWN
The problems were that:
1) resources were freed asymmetrically: in send_eof() on normal
 execution, but in the destructor in case of error.
2) the destructor was not called for result objects in case of SP
(so if the last SP execution ended with an error, resources were not
freed on reinit before the next execution (cleanup() is called before
the next execution), and the destructor was also not called because
delete was never invoked for the object).

Result cleanup() is renamed to reset_for_next_ps_execution() to better
reflect its function.

All result methods were revised and freeing of resources made symmetric.

The destructor of the result object is now called for SP.

Added the previously skipped invalidation in case of an error in INSERT.

Removed the misleading naming of reset(thd) (it could be confused
with reset()).
2025-01-13 10:04:27 +01:00
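An illustrative sketch of the "symmetric" freeing pattern described above (hypothetical class names, not the server's actual result-object hierarchy):

  #include <cstdio>

  struct io_cache { bool open = false; };

  class result_sink
  {
    io_cache cache;
  public:
    void start() { cache.open = true; }

    // Called both on success (send_eof) and when preparing for the next
    // PS/SP execution, so every path releases the same resources.
    void reset_for_next_ps_execution()
    {
      if (cache.open) { cache.open = false; std::puts("cache freed"); }
    }

    // The destructor frees whatever is still held, covering the case
    // where the last SP execution ended with an error.
    ~result_sink() { reset_for_next_ps_execution(); }
  };

  int main()
  {
    result_sink r;
    r.start();
    // simulate an error: send_eof() is never reached...
  } // ...but the destructor still frees the cache
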
Marko Mäkelä
15700f54c2 Merge 11.4 into 11.7 2025-01-09 09:41:38 +02:00
Marko Mäkelä
17f01186f5 Merge 10.11 into 11.4 2025-01-09 07:58:08 +02:00
Marko Mäkelä
3f914afd3a Merge 10.6 into 10.11 2025-01-02 12:39:56 +02:00
Yuchen Pei
671f80c738 Merge branch '10.5' into 10.6 2024-12-17 11:06:09 +11:00
Dmitry Shulga
54c1031b74 MDEV-34958: after Trigger doesn't work correctly with bulk insert
This bug has the same nature as the issues
  MDEV-34718: Trigger doesn't work correctly with bulk update
  MDEV-24411: Trigger doesn't work correctly with bulk insert

To fix the issue covering all use cases, resetting thd->bulk_param
temporarily to nullptr before invoking triggers and restoring
its original value on finishing execution of a trigger is moved to the method
  Table_triggers_list::process_triggers
which is ultimately invoked for any kind of trigger.
2024-12-13 16:19:39 +07:00
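A sketch of the save/restore idea described above, using a simplified stand-in for THD and trigger invocation (illustrative names only):

  #include <cstdio>

  struct THD { void *bulk_param = nullptr; };

  static void fire_trigger(THD *thd)
  {
    // A statement inside the trigger body must not see (and consume)
    // the bulk-bound parameters of the top-level statement.
    std::printf("trigger sees bulk_param=%p\n", thd->bulk_param);
  }

  static void process_triggers(THD *thd)
  {
    void *saved= thd->bulk_param;   // save
    thd->bulk_param= nullptr;       // hide bulk parameters from the trigger
    fire_trigger(thd);
    thd->bulk_param= saved;         // restore for the next bulk iteration
  }

  int main()
  {
    THD thd;
    int bound_values[3]= {1, 2, 3};
    thd.bulk_param= bound_values;
    process_triggers(&thd);
    std::printf("after: bulk_param=%p\n", thd.bulk_param);
  }
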
Marko Mäkelä
33907f9ec6 Merge 11.4 into 11.7 2024-12-02 17:51:17 +02:00
Marko Mäkelä
2719cc4925 Merge 10.11 into 11.4 2024-12-02 11:35:34 +02:00
Marko Mäkelä
3d23adb766 Merge 10.6 into 10.11 2024-11-29 13:43:17 +02:00
Marko Mäkelä
7d4077cc11 Merge 10.5 into 10.6 2024-11-29 12:37:46 +02:00
Brandon Nesterenko
dbfee9fc2b MDEV-34348: Consolidate cmp function declarations
Partial commit of the greater MDEV-34348 scope.
MDEV-34348: MariaDB is violating clang-16 -Wcast-function-type-strict

The functions queue_compare, qsort2_cmp, and qsort_cmp2
all had similar interfaces, and were used interchangeably
and unsafely cast to one another.

This patch consolidates all of the functions into the
qsort_cmp2 interface.

Reviewed By:
============
Marko Mäkelä <marko.makela@mariadb.com>
2024-11-23 08:14:22 -07:00
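A small illustration of why consolidating on a single comparator signature matters: calling a function through an incompatible function-pointer type is undefined behaviour, so the safe fix is to give every comparator the same prototype. The names below are simplified stand-ins, not the actual qsort_cmp2 declarations:

  #include <cstdio>
  #include <cstring>

  // One comparator interface used everywhere, so no casts are needed.
  typedef int (*cmp_fn)(void *arg, const void *a, const void *b);

  static int cmp_int(void *, const void *a, const void *b)
  {
    int x= *(const int *) a, y= *(const int *) b;
    return (x > y) - (x < y);
  }

  static int cmp_cstr(void *, const void *a, const void *b)
  {
    return std::strcmp(*(const char *const *) a, *(const char *const *) b);
  }

  // Generic code takes cmp_fn directly instead of casting between
  // queue_compare / qsort2_cmp / qsort_cmp2 style pointers.
  static const void *max_elem(const void *base, size_t n, size_t size,
                              cmp_fn cmp, void *arg)
  {
    const char *best= (const char *) base;
    for (size_t i= 1; i < n; i++)
    {
      const char *cur= (const char *) base + i * size;
      if (cmp(arg, cur, best) > 0)
        best= cur;
    }
    return best;
  }

  int main()
  {
    int nums[]= {3, 9, 4};
    const char *strs[]= {"pear", "apple", "plum"};
    std::printf("%d %s\n",
                *(const int *) max_elem(nums, 3, sizeof(int), cmp_int, nullptr),
                *(const char *const *)
                    max_elem(strs, 3, sizeof(char *), cmp_cstr, nullptr));
  }
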
Oleksandr Byelkin
b12ff287ec Merge branch '11.6' into 11.7 2024-11-10 19:22:21 +01:00
Oleksandr Byelkin
9e1fb104a3 MariaDB 11.4.4 release

Merge tag '11.4' into 11.6

MariaDB 11.4.4 release
2024-11-08 07:17:00 +01:00
Sergei Golubchik
f2512c0fa8 cleanup: prepare_for_insert() -> prepare_for_modify()
make handler::prepare_for_insert() be called to prepare
the handler for writes (INSERT/UPDATE/DELETE).
2024-11-05 14:00:49 -08:00
Sergei Golubchik
44c6328cbb cleanup: thd->alloc<>() and thd->calloc<>()
create templates

  thd->alloc<X>(n) to use instead of (X*)thd->alloc(sizeof(X)*n)

and the same for thd->calloc(). By default the type is char,
so the old usage thd->alloc(size) works too.
2024-11-05 14:00:48 -08:00
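A rough sketch of the template-wrapper idea described above, over a simplified stand-in allocator (not the actual THD/MEM_ROOT code):

  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  struct Arena             // stand-in for THD's memory root
  {
    void *alloc_raw(size_t size) { return std::malloc(size); }
    void *calloc_raw(size_t size)
    {
      void *p= std::malloc(size);
      if (p) std::memset(p, 0, size);
      return p;
    }

    // By default T is char, so old-style alloc(size) call sites still work.
    template <typename T= char> T *alloc(size_t n= 1)
    { return (T *) alloc_raw(sizeof(T) * n); }

    template <typename T= char> T *calloc(size_t n= 1)
    { return (T *) calloc_raw(sizeof(T) * n); }
  };

  int main()
  {
    Arena a;
    int *ints= a.alloc<int>(10);      // instead of (int*) a.alloc(sizeof(int)*10)
    char *buf= a.alloc(64);           // old-style usage keeps working
    double *zeros= a.calloc<double>(4);
    std::printf("%p %p %f\n", (void *) ints, (void *) buf, zeros[0]);
    std::free(ints); std::free(buf); std::free(zeros);
  }
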
Sergei Golubchik
eff16d7593 Revert "MDEV-15458 Segfault in heap_scan() upon UPDATE after ADD SYSTEM VERSIONING"
This partially reverts 43623f04a9

Engines have to set ::position() after ::write_row(), otherwise
the server won't be able to refer to the row just inserted.
This is important for high-level indexes.

The heap part isn't reverted, so heap doesn't support high-level indexes.
To fix this, it'll need info->lastpos in addition to info->current_ptr.
2024-11-05 14:00:48 -08:00
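A sketch of the contract described above, with a toy storage-engine interface (hypothetical; the real API is handler::write_row()/position()):

  #include <cstdio>
  #include <string>
  #include <vector>

  class toy_engine
  {
    std::vector<std::string> rows;
    size_t last_written= 0;
  public:
    int write_row(const std::string &row)
    {
      rows.push_back(row);
      last_written= rows.size() - 1;    // remember where the row went
      return 0;
    }

    // position() must be valid right after write_row(), so the server can
    // refer back to the row it just inserted (needed for high-level indexes).
    size_t position() const { return last_written; }

    const std::string &read_at(size_t pos) const { return rows[pos]; }
  };

  int main()
  {
    toy_engine t;
    t.write_row("r1");
    t.write_row("r2");
    size_t ref= t.position();           // refers to the row just inserted
    std::printf("row at ref: %s\n", t.read_at(ref).c_str());
  }
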
Oleksandr Byelkin
c770bce898 Merge branch '11.2' into 11.4 2024-10-30 15:11:17 +01:00
Oleksandr Byelkin
69d033d165 Merge branch '10.11' into 11.2 2024-10-29 16:42:46 +01:00
Oleksandr Byelkin
3d0fb15028 Merge branch '10.6' into 10.11 2024-10-29 15:24:38 +01:00
Oleksandr Byelkin
f00711bba2 Merge branch '10.5' into 10.6 2024-10-29 14:20:03 +01:00
Yuchen Pei
b8c2bd9f69 MDEV-35249 Fix regression caused by MDEV-34447
MDEV-34447 removed setting first_cond_optimization to 0 in update and
delete when leaf_tables_saved. This can cause problems when two PS
executions of an update go through different paths, where the first PS
execution goes through single-table update only and the second PS
execution also goes through multi-table update. When this happens, the
first_cond_optimization of the outer query is not set to false during
the first PS execution because optimize() is not called for the outer
query. But then the second PS execution will call optimize() on the
outer query, which with first_cond_optimization==true trips the
memory-leak detection for the second PS execution.

This is not a problem in higher versions, as both executions go through
multi-table updates, possibly due to MDEV-28883.

We fix this problem by restoring the FALSE assignments to
first_cond_optimization.
2024-10-25 18:03:40 +11:00
Yuchen Pei
4b6922a315 MDEV-25008: UPDATE/DELETE: Cost-based choice IN->EXISTS vs Materialization
Single-table UPDATE/DELETE didn't provide the outer_lookup_keys value for
subqueries. This didn't allow making a meaningful choice between the
IN->EXISTS and Materialization strategies for subqueries.

Fix this:
* Make UPDATE/DELETE save Sql_cmd_dml::scanned_rows,
* Then, subquery's JOIN::choose_subquery_plan() can fetch it from
there for outer_lookup_keys

Details:
UPDATE/DELETE now calls select_lex->optimize_unflattened_subqueries()
twice, like SELECT does (first calling optimize_constant_subqueries() in
JOIN::optimize_inner(), then calling optimize_unflattened_subqueries() in
JOIN::optimize_stage2()):
1. Call with const_only=true before any optimizations. This allows
range optimizer and others to use the values of cheap const
subqueries.
2. Call it with const_only=false after range optimizer, partition
pruning, etc. outer_lookup_keys value is provided, so it's possible to
pick a good subquery strategy.

Note: PROTECT_STATEMENT_MEMROOT requires that the first SP execution
performs subquery optimization for all subqueries, even for degenerate
query plans like "Impossible WHERE". Due to that, we ensure that the
call to optimize_unflattened_subqueries (with const_only=false) still
happens even for degenerate query plans, as was the case before this
change.
2024-10-23 23:51:24 +11:00
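A back-of-the-envelope sketch of why outer_lookup_keys matters for the choice (an illustrative cost model, not the optimizer's actual formulas):

  #include <cstdio>

  // IN->EXISTS re-executes the subquery once per outer lookup;
  // materialization pays a one-time fill cost plus cheap hash lookups.
  static const char *pick_strategy(double outer_lookup_keys,
                                   double subquery_scan_cost,
                                   double lookup_cost)
  {
    double in_to_exists= outer_lookup_keys * subquery_scan_cost;
    double materialize = subquery_scan_cost + outer_lookup_keys * lookup_cost;
    return in_to_exists <= materialize ? "IN->EXISTS" : "Materialization";
  }

  int main()
  {
    // With very few outer rows, repeated execution is cheaper; with many,
    // materializing the subquery once wins.
    std::printf("%s\n", pick_strategy(1,     1000, 1));
    std::printf("%s\n", pick_strategy(10000, 1000, 1));
  }
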
Alexander Barkov
a931da82fa MDEV-34123 CONCAT Function Returns Unexpected Empty Set in Query
Search conditions were evaluated using val_int(), which was wrong.
Fixing the code to use val_bool() instead.

Details:
- Adding a new item_base_t::IS_COND flag which marks Items used
  as <search condition> in WHERE, HAVING, JOIN ON, CASE WHEN clauses.
  The flag is set at parse time.
  These expressions must be evaluated using val_bool() rather than val_int().

  Note, the optimizer creates more Items which are used as search conditions.
  Most of these items are not marked with IS_COND yet. This is OK for now,
  but eventually these Items can also be fixed to have the flag.

- Adding a method Item::is_cond() which tests if the Item has the IS_COND flag.

- Implementing Item_cache_bool. It evaluates the cached expression using
  val_bool() rather than val_int().
  Overriding Type_handler_bool::Item_get_cache() to create Item_cache_bool.

- Implementing Item::save_bool_in_field(). It uses val_bool() rather than
  val_int() to evaluate the expression.

- Implementing Type_handler_bool::Item_save_in_field()
  using Item::save_bool_in_field().

- Fixing all Item_bool_func descendants to implement a virtual val_bool()
  rather than a virtual val_int().

- To find places where val_int() should be fixed to val_bool(), a few
  DBUG_ASSERT(!is_cond()) were added into the val_int() implementations
  of selected (most frequent) classes:

  Item_field
  Item_str_func
  Item_datefunc
  Item_timefunc
  Item_datetimefunc
  Item_cache_bool
  Item_bool_func
  Item_func_hybrid_field_type
  Item_basic_constant descendants

- Fixing all places where DBUG_ASSERT() happened during an "mtr" run
  to use val_bool() instead of val_int().
2024-10-08 11:58:46 +02:00
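A condensed sketch of the val_int()/val_bool() split and the IS_COND marker (a toy Item hierarchy, purely illustrative of the mechanism; not the server's classes or its exact string/number-to-boolean semantics):

  #include <cassert>
  #include <cstdio>

  enum item_flag { IS_COND = 1 };

  struct Item
  {
    unsigned flags= 0;
    bool is_cond() const { return flags & IS_COND; }
    virtual long long val_int() = 0;
    // By default a boolean evaluation goes through val_int(); classes used
    // as search conditions override val_bool() directly.
    virtual bool val_bool() { return val_int() != 0; }
    virtual ~Item() = default;
  };

  struct Item_decimal_const : Item      // toy constant with the value 0.4
  {
    double val= 0.4;
    long long val_int() override
    {
      assert(!is_cond());               // catch conditions evaluated via val_int()
      return (long long) val;           // truncates to 0
    }
    bool val_bool() override { return val != 0.0; }  // non-zero => true as a condition
  };

  int main()
  {
    Item_decimal_const cond;
    cond.flags|= IS_COND;               // marked as <search condition> at parse time
    std::printf("val_bool()=%d\n", (int) cond.val_bool());
  }
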
Yuchen Pei
ba7088d462 Merge '11.4' into 11.6 2024-10-03 15:59:20 +10:00
Yuchen Pei
1f7f406b7b Merge branch '11.2' into 11.4 2024-09-18 11:27:53 +10:00
Yuchen Pei
cfa9784edb Merge branch '10.11' into 11.2 2024-09-18 10:25:16 +10:00
Sergei Petrunia
5cd3fa81ef Merge 10.11 -> 11.2 2024-09-17 12:34:33 +03:00
Yuchen Pei
d002b1f503 Merge branch '10.6' into 10.11 2024-09-09 11:34:19 +10:00
Yuchen Pei
60b93cdd30 Merge branch '10.5' into 10.6 2024-09-06 13:52:57 +10:00
Yuchen Pei
e886c2ba02 MDEV-34757 Check leaf_tables_saved in partition pruning in UPDATE and DELETE 2024-09-06 11:41:59 +10:00
Yuchen Pei
2c3e07df47 MDEV-34447: Memory leakage is detected on running the test main.ps against the server 11.1
The memory leak happened on the second execution of a prepared statement
that runs an UPDATE statement with a correlated subquery on the right-hand side
of the SET clause. In this case, invocation of the method
  table->stat_records()
could return a zero value, which results in going into the 'if' branch
that handles an impossible WHERE condition. The issue is that this branch
missed saving the leaf tables, which has to be performed as the first
condition optimization activity. Later the PS statement memory root
is marked as read-only on finishing the first execution of the prepared
statement. The next time the same statement is executed it hits the assertion
on an attempt to allocate memory on the PS memory root marked as read-only.
This memory allocation takes place by the sequence of the following
invocations:
 Prepared_statement::execute
  mysql_execute_command
   Sql_cmd_dml::execute
    Sql_cmd_update::execute_inner
     Sql_cmd_update::update_single_table
      st_select_lex::save_leaf_tables
       List<TABLE_LIST>::push_back

To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to control
whether the method SELECT_LEX::save_leaf_tables() has to be called or
has already been invoked, so that no further invocation is required.

A similar issue could take place on running a DELETE statement with
a LIMIT clause in PS/SP mode. The reason for the memory leak is the same as in
the UPDATE case and it is fixed in the same way.
2024-09-06 11:41:58 +10:00
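A sketch of the guard-flag pattern described above (simplified stand-ins for SELECT_LEX and the PS memory root):

  #include <cstdio>
  #include <vector>

  struct select_lex_t
  {
    bool leaf_tables_saved= false;
    std::vector<int> leaf_tables;       // stand-in for List<TABLE_LIST>

    // On the first execution this allocates on the statement memory root;
    // on re-execution of the prepared statement the root is read-only, so
    // the work must be skipped instead of repeated.
    void save_leaf_tables()
    {
      if (leaf_tables_saved)
        return;
      leaf_tables.push_back(1);         // "allocation" happens only once
      leaf_tables_saved= true;
    }
  };

  int main()
  {
    select_lex_t lex;
    lex.save_leaf_tables();             // first PS execution
    lex.save_leaf_tables();             // second PS execution: no new allocation
    std::printf("saved %zu leaf table(s)\n", lex.leaf_tables.size());
  }
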
Oleksandr Byelkin
342fa29615 Merge branch '11.4' into 11.5 2024-08-21 11:52:54 +02:00
Oleksandr Byelkin
eb70e0d6e2 Merge branch '11.2' into 11.4 2024-08-21 09:30:54 +02:00
Oleksandr Byelkin
6197e6abc4 Merge branch '10.11' into 11.2 2024-08-21 07:58:46 +02:00
Oleksandr Byelkin
70afc62750 Merge branch '10.6' into 10.11 2024-08-20 10:00:39 +02:00
Oleksandr Byelkin
fc5772ce17 Merge branch '10.5' into 10.6 2024-08-20 09:11:34 +02:00
Dmitry Shulga
ba5482ffc2 MDEV-34718: Trigger doesn't work correctly with bulk update
Running an UPDATE statement in PS mode with positional
parameter(s) bound to an array of actual values (i.e.
prepared to be run in bulk mode) results in incorrect behaviour
in the presence of an on-update trigger that also executes an UPDATE
statement. The same is true for handling a DELETE statement in
the presence of an on-delete trigger. Typically, the visible effect of
such incorrect behaviour is a wrong number of
updated/deleted rows in the target table. Additionally, in the case of an
UPDATE statement, the number of modified rows and the status message returned
by the statement contain wrong information about the number of modified rows.

The reason for the incorrect number of updated/deleted rows is that
the data structure used for binding a positional argument to its
actual values is stored in THD (this is thd->bulk_param) and reused
on processing every INSERT/UPDATE/DELETE statement. This leads to
the actual values bound to the top-level UPDATE/DELETE statement being
consumed by other DML statements run by the triggers' body.

To fix the issue, reset thd->bulk_param temporarily to
nullptr before invoking triggers and restore its value on finishing
their execution.

The second part of the problem, relating to the wrong value of affected
rows reported via the Connector/C API, is caused by the fact that the
diagnostics area is reused by the original DML statement and a statement
invoked by a trigger. This fact should be taken into account when finalizing
the state of the diagnostics area on completion of a statement.

Important remark: in case the macro DBUG_OFF is on, a call of the method
  Diagnostics_area::reset_diagnostics_area()
results in a reset of the data members
  m_affected_rows, m_statement_warn_count.
Values of these data members of the class Diagnostics_area are used on
sending OK and EOF messages. In case a DML statement is executed in PS bulk
mode, such resetting results in sending wrong affected-rows values to the
client when the DML statement fires triggers. So, reset
these data members only in case the current statement being processed
is not run in bulk mode.
2024-08-19 12:13:43 +07:00
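A sketch of the conditional reset described in the last paragraph (a toy diagnostics structure, not the actual Diagnostics_area):

  #include <cstdio>

  struct diag_area
  {
    unsigned long long affected_rows= 0;
    unsigned statement_warn_count= 0;

    void reset(bool bulk_mode)
    {
      // In bulk mode the counters accumulate across all bound parameter
      // sets, so resetting them here would report wrong totals when a
      // trigger statement reuses the same diagnostics area.
      if (!bulk_mode)
      {
        affected_rows= 0;
        statement_warn_count= 0;
      }
    }
  };

  int main()
  {
    diag_area da;
    da.affected_rows= 3;                // rows touched so far in bulk execution
    da.reset(/*bulk_mode=*/ true);      // trigger statement begins: keep totals
    da.affected_rows+= 2;
    std::printf("affected_rows=%llu\n", da.affected_rows);   // 5, not 2
  }
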
Dmitry Shulga
73b58ac487 MDEV-34649: Memory leaks on running DELETE statement in PS mode with positional parameters
The re-design of the way the DELETE statement is handled, introduced by
the task MDEV-28883, added a regression caused by a missing reset of
the data member current_select->first_cond_optimization on handling
the DELETE statement, which results in memory leaks on second execution
of the same DELETE statement in PS mode.

To fix the memory leaks, the data member
  current_select->first_cond_optimization
is set to false on finishing execution of the DELETE statement.
2024-08-01 13:07:12 +07:00
Alexander Barkov
4e805aed85 Merge remote-tracking branch 'origin/11.4' into 11.5 2024-07-10 12:17:09 +04:00
Alexander Barkov
5fb07d942b Merge remote-tracking branch 'origin/11.2' into 11.4 2024-07-09 21:45:37 +04:00
Dmitry Shulga
e012407397 MDEV-34447: Memory leakage is detected on running the test main.ps against the server 11.1
The memory leak happened on the second execution of a prepared statement
that runs an UPDATE statement with a correlated subquery on the right-hand side
of the SET clause. In this case, invocation of the method
  table->stat_records()
could return a zero value, which results in going into the 'if' branch
that handles an impossible WHERE condition. The issue is that this branch
missed saving the leaf tables, which has to be performed as the first
condition optimization activity. Later the PS statement memory root
is marked as read-only on finishing the first execution of the prepared
statement. The next time the same statement is executed it hits the assertion
on an attempt to allocate memory on the PS memory root marked as read-only.
This memory allocation takes place by the sequence of the following
invocations:
 Prepared_statement::execute
  mysql_execute_command
   Sql_cmd_dml::execute
    Sql_cmd_update::execute_inner
     Sql_cmd_update::update_single_table
      st_select_lex::save_leaf_tables
       List<TABLE_LIST>::push_back

To fix the issue, add the flag SELECT_LEX::leaf_tables_saved to control
whether the method SELECT_LEX::save_leaf_tables() has to be called or
has already been invoked, so that no further invocation is required.

A similar issue could take place on running a DELETE statement with
a LIMIT clause in PS/SP mode. The reason for the memory leak is the same as in
the UPDATE case and it is fixed in the same way.
2024-07-02 18:40:11 +07:00
Oleksandr Byelkin
dd7d9d7fb1 Merge branch '11.4' into 11.5 2024-05-23 17:01:43 +02:00
Oleksandr Byelkin
99b370e023 Merge branch '11.2' into 11.4 2024-05-21 19:38:51 +02:00
Sergei Golubchik
ea6975b1f1 MDEV-30366 Permit bulk implementation to return ALL individual results
COM_STMT_BULK_STMT: new flag to the server to return all unitary results
2024-04-22 14:57:29 +02:00