Commit graph

75 commits

Author SHA1 Message Date
Sergei Golubchik
32e6f8ff2e cleanup: remove unconditional #ifdef's 2024-11-05 14:00:47 -08:00
Sergei Petrunia
368dd22a81 MDEV-31223: UBSan error: sql_select.h:969:7: runtime error: load of value...
In Loose_scan_opt::save_to_position, initialize
POSITION::firstmatch_with_join_buf.
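
A minimal, self-contained sketch of the bug class and the fix (not the actual
patch; POSITION_demo is a hypothetical stand-in for the real POSITION):
loading a bool that was never assigned is undefined behaviour, which UBSan
can flag, and initializing the member in save_to_position() avoids it.

  /* Compile with: g++ -fsanitize=bool,undefined demo.cc */
  struct POSITION_demo
  {
    bool firstmatch_with_join_buf;          /* was left uninitialized */
  };

  static void save_to_position_demo(POSITION_demo *pos)
  {
    pos->firstmatch_with_join_buf= false;   /* the fix: always initialize */
  }

  int main()
  {
    POSITION_demo pos;
    save_to_position_demo(&pos);
    return pos.firstmatch_with_join_buf ? 1 : 0;  /* safe: value is defined */
  }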
2023-05-09 13:09:00 +03:00
Sergei Petrunia
d61bc94fa0 MDEV-30659 Server crash on EXPLAIN SELECT/SELECT on table with engine Aria for LooseScan Strategy
Amended patch from Monty:

The issue was that Loose_scan_opt::save_to_position() did not take
into account records_out from best_access_path().

Make sure that POSITION object filled by Loose_scan_opt::save_to_position()
has records_out not higher than any other possible access method.
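
A minimal sketch of the clamping idea (hypothetical stand-in types; the real
change is inside Loose_scan_opt::save_to_position()):

  #include <algorithm>

  struct POSITION_demo { double records_out; };

  /* The POSITION filled in for LooseScan must not claim more output rows
     than best_access_path() already established for this table. */
  static void save_to_position_demo(POSITION_demo *pos,
                                    double loosescan_records_out,
                                    double best_records_out)
  {
    pos->records_out= std::min(loosescan_records_out, best_records_out);
  }

  int main()
  {
    POSITION_demo pos;
    save_to_position_demo(&pos, 100.0, 10.0);  /* records_out becomes 10 */
    return pos.records_out == 10.0 ? 0 : 1;
  }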
2023-02-21 15:27:23 +03:00
Sergei Petrunia
6c4076fac4 MDEV-30032: EXPLAIN FORMAT=JSON output: part #2: print 'loops'. 2023-02-03 11:22:17 +03:00
Monty
2eb6b801ad Fixes some issues in Firstmatch optimization
Allows FirstMatch to handle the case where the fanout of firstmatch tables
is already less than 1.
Also fixes the LooseScan strategy to set position->{records_init, records_out}
(they were set to 0, which also caused assertion failures).

Author: Sergei Petrunia <sergey@mariadb.com>
Reviewer: Monty
2023-02-02 23:58:58 +03:00
Monty
d9d0e78039 Add limits for how many IO operations a table access will do
This solves the current problem in the optimizer
- SELECT FROM big_table
  - SELECT from small_table where small_table.eq_ref_key=big_table.id

The old code assumed that each eq_ref access will cause an IO.
As the cost of IO is high, this dominated the cost for the later table
which caused the optimizer to prefer table scans + join cache over
index reads.

This patch fixes this issue by limiting the number of expected IO calls,
for rows and index separately, to the size of the table or index or
the number of accesses that we expect in a range for the index.

The major changes are:

- Adding a new structure ALL_READ_COST that is mainly used in
  best_access_path() to hold the cost parts of the cost we are
  calculating. This allows us to limit the number of IO when multiplying
  the cost with the previous row combinations (sketched at the end of
  this message).
- All storage engine cost functions are changed to return IO_AND_CPU_COST.
  The virtual cost functions should now return in IO_AND_CPU_COST.io
  the number of disk blocks that will be accessed instead of the cost
  of the access.
- We are not limiting the io_blocks for table or index scans, as we
  assume that engines may not store these in the 'hot' part of the
  cache. Table and index scans also use far fewer IO blocks than
  key accesses, so the original issue is not as critical with scans.

Other things:
- OPT_RANGE now holds a 'Cost_estimate cost' instead of a lot of different
  costs. All the old costs, like index_only_read, can be extracted
  from 'cost'.
- Added to the start of some functions 'handler *file= table->file'
  to shorten the code that is using the handler.
- handler->cost() is used to change an ALL_READ_COST or IO_AND_CPU_COST
  to 'cost in milliseconds'.
- New functions:  handler::index_blocks() and handler::row_blocks()
  which are used to limit the IO.
- Added index_cost and row_cost to Cost_estimate and removed all not
  needed members.
- Removed cost coefficients from Cost_estimate as these don't make sense
  when costs (except IO_BLOCKS) are in milliseconds.
- Removed handler::avg_io_cost() and replaced it with DISK_READ_COST.
- Renamed best_range_rowid_filter_for_partial_join() to
  best_range_rowid_filter() as using the old name made rows too long.
- Changed all SJ_MATERIALIZATION_INFO 'Cost_estimate' variables to
  'double' as the power of Cost_estimate was not used for these and thus
  just caused storage and performance overhead.
- Changed cost_for_index_read() to use 'worst_seeks' to only limit
  IO, not number of table accesses. With this patch worst_seeks is
  probably not needed anymore, but I kept it around just in case.
- Applying cost for filter got to be much shorter and easier thanks
  to the API changes.
- Adjusted cost for fulltext keys in collaboration with Sergei Golubchik.
- Most test changes caused by this patch are that table scans are changed
  to use indexes.
- Added ha_seq::keyread_time() and ha_seq::key_scan_time() to
  make checking the number of potential IO blocks easier during debugging.
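
A minimal sketch of the IO-limiting idea described above (names modelled on
this message; the semantics are simplified assumptions, not the server code).
Per the list above, IO_AND_CPU_COST.io holds the number of disk blocks to
access, so when a cost is multiplied by the number of previous row
combinations, the IO part can be capped at the size of the table or index:

  #include <algorithm>
  #include <cstdio>

  struct IO_AND_CPU_COST { double io; double cpu; };  /* io = disk blocks */

  static IO_AND_CPU_COST multiply_with_io_limit(IO_AND_CPU_COST cost,
                                                double prev_row_combinations,
                                                double total_blocks)
  {
    IO_AND_CPU_COST res;
    res.cpu= cost.cpu * prev_row_combinations;         /* CPU scales linearly */
    res.io= std::min(cost.io * prev_row_combinations,  /* IO is capped by the */
                     total_blocks);                    /* size of the object  */
    return res;
  }

  int main()
  {
    IO_AND_CPU_COST eq_ref= { 1.0, 0.01 };             /* one block per probe */
    IO_AND_CPU_COST total= multiply_with_io_limit(eq_ref, 1e6, 100.0);
    printf("io=%g cpu=%g\n", total.io, total.cpu);     /* io capped at 100 */
    return 0;
  }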
2023-02-02 23:57:30 +03:00
Monty
b66cdbd1ea Changing all cost calculation to be given in milliseconds
This makes it easier to compare different costs and also allows
the optimizer to optimize different storage engines more reliably.

- Added tests/check_costs.pl, a tool to verify optimizer cost calculations.
  - Most engine costs have been found with this program. All steps to
    calculate the new costs are documented in Docs/optimizer_costs.txt

- User optimizer_cost variables are given in microseconds (as individual
  costs can be very small). Internally they are stored in ms.
- Changed DISK_READ_COST (was DISK_SEEK_BASE_COST) from a hard disk cost
  (9 ms) to common SSD cost (400MB/sec).
- Removed cost calculations for hard disks (rotation etc).
- Changed the following handler functions to return IO_AND_CPU_COST.
  This makes it easy to apply different cost modifiers in ha_..time()
  functions for io and cpu costs.
  - scan_time()
  - rnd_pos_time() & rnd_pos_call_time()
  - keyread_time()
- Enhanced keyread_time() to calculate the full cost of reading a set
  of keys with a given number of ranges and an optional number of blocks
  that need to be accessed.
- Removed read_time() as keyread_time() + rnd_pos_time() can do the same
  thing and more.
- Tuned cost for: heap, myisam, Aria, InnoDB, archive and MyRocks.
  Used heap table costs for json_table. The rest are using default engine
  costs.
- Added the following new optimizer variables:
  - optimizer_disk_read_ratio
  - optimizer_disk_read_cost
  - optimizer_key_lookup_cost
  - optimizer_row_lookup_cost
  - optimizer_row_next_find_cost
  - optimizer_scan_cost
- Moved all engine-specific costs to the OPTIMIZER_COSTS structure.
- Changed costs to use 'records_out' instead of 'records_read' when
  recalculating costs.
- Split optimizer_costs.h to optimizer_costs.h and optimizer_defaults.h.
  This allows one to change costs without having to compile a lot of
  files.
- Updated costs for filter lookup.
- Use a better cost estimate in best_extension_by_limited_search()
  for the sorting cost.
- Fixed previous issues with 'filtered' explain column as we are now
  using 'records_out' (min rows seen for table) to calculate filtering.
  This greatly simplifies the filtering code in
  JOIN_TAB::save_explain_data().

This change caused a lot of queries to be optimized differently than
before, which exposed different issues in the optimizer that need to
be fixed.  These fixes are in the following commits.  To not have to
change the same test case over and over again, the changes in the test
cases are done in a single commit after all the critical change sets
are done.

InnoDB changes:
- Updated InnoDB to not divide big range cost with 2.
- Added cost for InnoDB (innobase_update_optimizer_costs()).
- Don't mark the clustered primary key with HA_KEYREAD_ONLY. This
  prevents the optimizer from trying to use index-only scans on
  the clustered key.
- Disabled ha_innobase::scan_time() and ha_innobase::read_time() and
  ha_innobase::rnd_pos_time() as the default engine cost functions now
  work well for InnoDB.

Other things:
- Added --show-query-costs (\Q) option to mysql.cc to show the query
  cost after each query (good when working with query costs).
- Extended my_getopt with GET_ADJUSTED_VALUE which allows one to adjust
  the value given by the user. This is used to change cost from
  microseconds (user input) to milliseconds (what the server is
  internally using).
- Added include/my_tracker.h; a useful include file to quickly test
  the cost of a function.
- Use handler::set_table() in all places instead of 'table= arg'.
- Added SHOW_OPTIMIZER_COSTS to sys variables. These are input and
  shown in microseconds for the user but stored as milliseconds.
  This is to make the numbers easier to read for the user (less
  pre-zeros).  Implemented in 'Sys_var_optimizer_cost' class.
- In test_quick_select() do not use index scans if 'no_keyread' is set
  for the table. This is what we do in other places of the server.
- Added THD parameter to Unique::get_use_cost() and
  check_index_intersect_extension() and similar functions to be able
  to provide costs to called functions.
- Changed 'records' to 'rows' in optimizer_trace.
- Write more information to optimizer_trace.
- Added INDEX_BLOCK_FILL_FACTOR_MUL (4) and INDEX_BLOCK_FILL_FACTOR_DIV (3)
  to calculate the space used by keys in B-trees. (Before we used numeric
  constants.)
- Removed code that assumed that B-trees have similar costs to binary
  trees. Replaced with engine calls that return the cost.
- Added Bitmap::find_first_bit()
- Added timings to join_cache for ANALYZE table (patch by Sergei Petrunia).
- Added records_init and records_after_filter to POSITION to remember
  more of what best_access_path() calculates.
- table_after_join_selectivity() changed to recalculate 'records_out'
  based on the new fields from best_access_path().

Bug fixes:
- Some queries did not update last_query_cost (was 0). Fixed by moving
  the setting of thd->...last_query_cost into JOIN::optimize().
- Write '0' as number of rows for const tables with a matching row.

Some internals:
- Engine costs are stored in the OPTIMIZER_COSTS structure.  When a
  handlerton is created, we also create a new cost variable for the
  handlerton. We also create a new variable if the user changes an
  optimizer cost for a not yet loaded handlerton, either with command
  line arguments or with SET
  @@global.engine.optimizer_cost_variable=xx.
- There are 3 global OPTIMIZER_COSTS variables:
  default_optimizer_costs   The default costs + changes from the
                            command line without an engine specifier.
  heap_optimizer_costs      Heap table costs, used for temporary tables
  tmp_table_optimizer_costs The cost for the default on disk internal
                            temporary table (MyISAM or Aria)
- The engine cost for a table is stored in table_share. To speed up
  accesses the handler has a pointer to this. The cost is copied
  to the table on first access. If one wants to change the cost one
  must first update the global engine cost and then do a FLUSH TABLES.
  This was done to be able to access the costs for an open table
  without any locks.
- When a handlerton is created, the costs are updated the following way
  (sketched after this list):
  See sql/keycaches.cc for details:
  - Use 'default_optimizer_costs' as a base
  - Call hton->update_optimizer_costs() to override with the engines
    default costs.
  - Override the costs that the user has specified for the engine.
  - On handler open, copy the engine cost from handlerton to TABLE_SHARE.
  - Call handler::update_optimizer_costs() to allow the engine to update
    cost for this particular table.
  - There are two costs stored in THD. These are copied to the handler
    when the table is used in a query:
    - optimizer_where_cost
    - optimizer_scan_setup_cost
- Simplify code in best_access_path() by storing all cost results in a
  structure. (Idea/suggestion by Igor)
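
A minimal sketch of the cost-layering order described under "Some internals"
(all functions and values here are hypothetical illustrations; the real logic
lives around sql/keycaches.cc and handler open):

  #include <cstdio>

  struct OPTIMIZER_COSTS { double disk_read_cost; double key_lookup_cost; };

  static OPTIMIZER_COSTS default_optimizer_costs= { 1.0, 1.0 }; /* placeholder */

  /* step 2: the engine overrides its own defaults
     (hton->update_optimizer_costs() in the server) */
  static void engine_update_optimizer_costs(OPTIMIZER_COSTS *costs)
  { costs->key_lookup_cost*= 0.5; }          /* hypothetical engine tuning */

  /* step 3: apply what the user set for this engine, e.g.
     SET @@global.engine.optimizer_cost_variable=xx */
  static void apply_user_overrides(OPTIMIZER_COSTS *costs)
  { (void) costs; /* none in this sketch */ }

  static OPTIMIZER_COSTS resolve_hton_costs()
  {
    OPTIMIZER_COSTS costs= default_optimizer_costs;  /* step 1: the base */
    engine_update_optimizer_costs(&costs);           /* step 2           */
    apply_user_overrides(&costs);                    /* step 3           */
    return costs;  /* copied to TABLE_SHARE on first open; changing costs
                      later requires FLUSH TABLES, as noted above */
  }

  int main()
  {
    OPTIMIZER_COSTS c= resolve_hton_costs();
    printf("key_lookup_cost=%g\n", c.key_lookup_cost);
    return 0;
  }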
2023-02-02 23:54:45 +03:00
Monty
b6215b9b20 Update row and key fetch cost models to take into account data copy costs
Before this patch, when calculating the cost of fetching and using a
row/key from the engine, we took into account the cost of finding a
row or key from the engine, but did not consistently take into account
index-only accesses, clustered keys or covered keys for all access
paths.

The cost of the WHERE clause (TIME_FOR_COMPARE) was not consistently
considered in best_access_path().  TIME_FOR_COMPARE was used in
calculations in other places, like greedy_search(), but was in some
cases (like scans) applied to a different number of rows than was
accessed.

The cost calculation of row and index scans didn't take into account
the number of rows that were accessed, only the number of accepted
rows.

When using a filter, the cost of index_only_reads and the cost of
accessing and disregarding 'filtered rows' were not taken into
account, which made filters cost less than they actually did.

To remedy the above, the following key & row fetch related costs
have been added:

- The cost of fetching and using a row is now split into different costs:
  - key + row fetch cost (as before), but multiplied by the variable
  'optimizer_cache_cost' (default 0.5). This allows the user to
  tell the optimizer the likelihood of finding the key and row in the
  engine cache.
- ROW_COPY_COST, the cost of copying a row from the engine to the
  sql layer or creating a row from the join_cache to the record
  buffer. Mostly affects table scan costs.
- ROW_LOOKUP_COST, the cost of fetching a row by rowid.
- KEY_COPY_COST, the cost of finding the next key and copying it from
  the engine to the SQL layer. This is used when we calculate the cost
  of index-only reads. It makes index scans more expensive than before if
  they cover a lot of rows. (main.index_merge_myisam)
- KEY_LOOKUP_COST, the cost of finding the first key in a range.
  This replaces the old define IDX_LOOKUP_COST, but with a higher cost.
- KEY_NEXT_FIND_COST, the cost of finding the next key (and rowid)
  when doing an index scan and comparing the rowid to the filter.
  Before, this cost was assumed to be 0.

All of the above constants/variables are now tuned to be somewhat in
proportion to each other's execution complexity.  They will need more
tuning in the future, but that can wait until the above are made user
variables, as that will make tuning much easier.

To make the usage of the above easy, there are new (not virtual)
cost calculation functions in handler (composition sketched after this
list):
- ha_read_time(), like read_time(), but take optimizer_cache_cost into
  account.
- ha_read_and_copy_time(), like ha_read_time() but take into account
  ROW_COPY_COST.
- ha_read_and_compare_time(), like ha_read_and_copy_time() but take
  TIME_FOR_COMPARE into account.
- ha_rnd_pos_time(). Read row with row id, taking ROW_COPY_COST
  into account.  This is used with filesort where we don't need
  to execute the WHERE clause again.
- ha_keyread_time(), like keyread_time() but take
  optimizer_cache_cost into account.
- ha_keyread_and_copy_time(), like ha_keyread_time(), but add
  KEY_COPY_COST.
- ha_key_scan_time(), like key_scan_time() but take
  optimizer_cache_cost into account.
- ha_key_scan_and_compare_time(), like ha_key_scan_time(), but add
  KEY_COPY_COST & TIME_FOR_COMPARE.
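
A minimal sketch of how these helpers layer on top of each other (signatures
and constant values are assumptions for illustration, not the real handler
API):

  /* Placeholder values; the real constants live in the server sources. */
  static const double ROW_COPY_COST=    0.5;
  static const double TIME_FOR_COMPARE= 1.0;

  /* like read_time(), but weighted by the expected cache hit ratio */
  static double ha_read_time_demo(double engine_fetch_cost,
                                  double optimizer_cache_cost)
  { return engine_fetch_cost * optimizer_cache_cost; }

  /* + the cost of copying each row from the engine to the SQL layer */
  static double ha_read_and_copy_time_demo(double engine_fetch_cost,
                                           double optimizer_cache_cost,
                                           double rows)
  {
    return ha_read_time_demo(engine_fetch_cost, optimizer_cache_cost) +
           rows * ROW_COPY_COST;
  }

  /* + the cost of checking the WHERE clause on each row */
  static double ha_read_and_compare_time_demo(double engine_fetch_cost,
                                              double optimizer_cache_cost,
                                              double rows)
  {
    return ha_read_and_copy_time_demo(engine_fetch_cost,
                                      optimizer_cache_cost, rows) +
           rows * TIME_FOR_COMPARE;
  }

  int main()
  { return ha_read_and_compare_time_demo(10.0, 0.5, 100.0) > 0 ? 0 : 1; }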

I also added some setup costs for doing different types of scans and
creating temporary tables (on disk and in memory). This encourages
the optimizer to not use these for simple few-row lookups if
there are adequate key lookup strategies.
- TABLE_SCAN_SETUP_COST, cost of starting a table scan.
- INDEX_SCAN_SETUP_COST, cost of starting an index scan.
- HEAP_TEMPTABLE_CREATE_COST, cost of creating in memory
  temporary table.
- DISK_TEMPTABLE_CREATE_COST, cost of creating an on disk temporary
  table.

When calculating the cost of fetching ranges, we had a cost of
IDX_LOOKUP_COST (0.125) for doing a key dive for a new range. This is
now replaced with 'io_cost * KEY_LOOKUP_COST (1.0) *
optimizer_cache_cost', which matches the cost we use for 'ref' and
other key lookups. The effect is that the cost is now a bit higher
when we have many ranges for a key.
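
A sketch of the new per-range startup cost (the formula is quoted from the
paragraph above; the stand-alone form here is an assumption):

  static const double KEY_LOOKUP_COST= 1.0;   /* replaces IDX_LOOKUP_COST */

  /* cost of starting one more range, weighted by the cache hit ratio */
  static double range_start_cost(double io_cost, double optimizer_cache_cost)
  {
    return io_cost * KEY_LOOKUP_COST * optimizer_cache_cost;
  }

  int main()
  { return range_start_cost(1.0, 0.5) > 0 ? 0 : 1; }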

Almost all calculations with TIME_FOR_COMPARE are now done in
best_access_path(). 'JOIN::read_time' now includes the full
cost of finding the rows in the table.

In the result files, many of the changes are now again close to what
they were before the "Update cost for hash and cached joins" commit,
as that commit didn't fix the filter cost (too complex to do
everything in one commit).

The above changes showed a lot of inconsistencies in optimizer cost
calculation. The main objective of the other changes was to do the
calculations as similarly (and accurately) as possible and to make
different plans more comparable.

Detailed list of changes:

- Calculate index_only_cost consistently and correctly for all scan
  and ref accesses. The row fetch_cost and index_only_cost now
  take into account clustered keys, covered keys and index-only
  accesses.
- cost_for_index_read now returns both the full cost and index_only_cost.
- Fixed cost calculation of get_sweep_read_cost() to match other
  similar costs. This is based on the assumption that data is more
  often stored on SSD than on a hard disk.
- Replaced constant 2.0 with new define TABLE_SCAN_SETUP_COST.
- Some scan cost estimates did not take into account
  TIME_FOR_COMPARE. Now all scan costs take this into
  account. (main.show_explain)
- Added session variable optimizer_cache_hit_ratio (default 50%). By
  adjusting this one can reduce or increase the cost of index or direct
  record lookups. The effect of the default is that key lookups are now
  a bit cheaper than before. See usage of 'optimizer_cache_cost' in
  handler.h.
- JOIN_TAB::scan_time() did not take into account index-only scans,
  which produced a wrong cost when an index scan was used. Changed
  JOIN_TAB::scan_time() to take into consideration clustered and
  covered keys. The values are now cached and we only have to call
  this function once. Other calls are changed to use the cached
  values.  Function renamed to JOIN_TAB::estimate_scan_time().
- Fixed that most index cost calculations are done the same way and
  closer to 'range' calculations. The cost is now lower than
  before for small data sets and higher for large data sets as we take
  into account how many keys are read (main.opt_trace_selectivity,
  main.limit_rows_examined).
- Ensured that index_scan_cost() ==
  range(scan_of_all_rows_in_table_using_one_range) +
  MULTI_RANGE_READ_INFO_CONST. One effect of this is that if there
  is a choice between doing a full index scan and a range-index scan over
  almost the whole table, then the index scan will be preferred (no
  range-read setup cost).  (innodb.innodb, main.show_explain,
  main.range)
  - Fixed that EQ_REF and REF take into account clustered and covered
    keys.  This changes some plans to use covered or clustered indexes
    as these are much cheaper.  (main.subselect_mat_cost,
    main.state_tables_innodb, main.limit_rows_examined)
  - Rowid filter setup cost and filter compare cost now take into
    account fetching and checking the rowid (KEY_NEXT_FIND_COST).
    (main.partition_pruning heap.heap_btree main.log_state)
  - Added KEY_NEXT_FIND_COST to
    Range_rowid_filter_cost_info::lookup_cost to account for the time
    to find and check the next key value against the container.
  - Introduced ha_keyread_time(rows) that takes into account finding
    the next row and copying the key value to 'record'
    (KEY_COPY_COST).
  - Introduced ha_key_scan_time() for calculating an index scan over
    all rows.
  - Added IDX_LOOKUP_COST to keyread_time() as a startup cost.
  - Added index_only_fetch_cost() as a convenience function to
    OPT_RANGE.
  - keyread_time() cost is slightly reduced to prefer shorter keys.
    (main.index_merge_myisam)
  - All of the above caused some index_merge combinations to be
    rejected because of cost (main.index_intersect). In some cases
    'ref' was replaced with index_merge because of the low
    cost calculation of get_sweep_read_cost().
  - Some index usage moved from PRIMARY to a covering index.
    (main.subselect_innodb)
- Changed cost calculation of filter to take KEY_LOOKUP_COST and
  TIME_FOR_COMPARE into account.  See sql_select.cc::apply_filter().
  Filter parameters and costs are now written to optimizer_trace.
- Don't use matchings_records_in_range() to try to estimate the number
  of filtered rows for ranges. The reason is that we want to ensure
  that 'range' is calculated similarly to 'ref'. There is also more work
  needed to calculate the selectivity when using ranges and
  filtering.  This causes the filtering column in EXPLAIN EXTENDED to be
  100.00 for some cases where range cannot use filtering.
  (main.rowid_filter)
- Introduced ha_scan_time() that takes into account the CPU cost of
  finding the next row and copying the row from the engine to
  'record'. This causes costs of table scans to slightly increase and
  some tests changed their plan from ALL to RANGE or from ALL to ref.
  (innodb.innodb_mysql, main.select_pkeycache)
  In a few cases where the scan time of very small tables has a lower
  cost than a ref or range, things changed from ref/range to ALL.
  (main.myisam, main.func_group, main.limit_rows_examined,
  main.subselect2)
- Introduced ha_scan_and_compare_time() which is like ha_scan_time()
  but also adds the cost of the where clause (TIME_FOR_COMPARE).
- Added small cost for creating temporary table for
  materialization. This causes some very small tables to use scan
  instead of materialization.
- Added checking of the WHERE clause (TIME_FOR_COMPARE) of the
  accepted rows to ROR costs in get_best_ror_intersect()
- Removed '- 0.001' from 'join->best_read' and optimize_straight_join()
  to ensure that the 'Last_query_cost' status variable contains the
  same value as the one that was calculated by the optimizer.
- Take avg_io_cost() into account in handler::keyread_time() and
  handler::read_time(). This should have no effect as it's 1.0 by
  default, except for heap that overrides these functions.
- Some 'ref_or_null' accesses changed to 'range' because of cost
  adjustments (main.order_by)
- Added scan type "scan_with_join_cache" for optimizer_trace. This is
  just to show in the trace what kind of scan was used.
- When using 'scan_with_join_cache', take into account the number of
  preceding tables (as we have to restore all fields for all previous
  table combinations when checking the WHERE clause).
  The new cost added (sketched after this list) is:
  (row_combinations * ROW_COPY_COST * number_of_cached_tables).
  This increases the cost of join buffering in proportion of the
  number of tables in the join buffer. One effect is that full scans
  are now done earlier as the cost is then smaller.
  (main.join_outer_innodb, main.greedy_optimizer)
- Removed the usage of 'worst_seeks' in cost_for_index_read as it
  caused wrong plans to be created; it preferred JT_EQ_REF even if it
  would be much more expensive than a full table scan. A related
  issue was that worst_seeks only applied to full lookups, not to
  clustered or index-only lookups, which is not consistent. This
  caused some plans to use index scan instead of eq_ref (main.union)
- Changed federated block size from 4096 to 1500, which is the
  typical size of an IO packet.
- Added costs for reading rows to Federated. Needed as there is no
  caching of rows in the federated engine.
- Added ha_innobase::rnd_pos_time() cost function.
- A lot of extra things added to optimizer trace:
  - More costs, especially for materialization and index_merge.
  - Made labels more uniform.
  - Fixed a lot of minor bugs.
  - Added 'trace_started()' around a lot of trace blocks.
- When calculating the ORDER BY with LIMIT cost for using an index,
  the cost did not take into account the number of row retrievals
  that have to be done or the cost of comparing the rows with the
  WHERE clause. The cost calculated would be just a fraction of
  the real cost. Now we calculate the cost as we do for ranges
  and 'ref'.
- 'Using index for group-by' is used a bit more than before, as we
  now take into account the WHERE clause cost when comparing
  with 'ref' and prefer the method with fewer row combinations.
  (main.group_min_max)
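
A sketch of the join-buffering copy cost from the 'scan_with_join_cache' item
above (the formula is quoted from the text; the ROW_COPY_COST value is a
placeholder):

  static const double ROW_COPY_COST= 0.5;   /* placeholder value */

  /* Extra copy cost when checking the WHERE clause for a join-cached row:
     the fields of all previously cached tables must be restored for every
     row combination. */
  static double join_cache_copy_cost(double row_combinations,
                                     double number_of_cached_tables)
  {
    return row_combinations * ROW_COPY_COST * number_of_cached_tables;
  }

  int main()
  { return join_cache_copy_cost(1000.0, 3.0) > 0 ? 0 : 1; }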

Bugs fixed:
- Fixed that we don't calculate TIME_FOR_COMPARE twice for some plans,
  like in optimize_straight_join() and greedy_search()
- Fixed bug in save_explain_data where we could test for the wrong
  index when displaying 'Using index'. This caused some old plans to
  show 'Using index'.  (main.subselect_innodb, main.subselect2)
- Fixed bug in get_best_ror_intersect() where 'min_cost' was not
  updated, and the cost we compared with was not the one that was
  used.
- Fixed very wrong cost calculation for priority queues in
  check_if_pq_applicable(). (main.order_by now correctly uses priority
  queue)
- When calculating the cost of EQ_REF or REF, we added the cost of
  comparing the WHERE clause with the found rows, not with all row
  combinations. This made ref and eq_ref be regarded as way too cheap
  compared to other access methods.
- FORCE INDEX cost calculation didn't take into account clustered or
  covered indexes.
- JT_EQ_REF cost was estimated as avg_io_cost(), which is half the
  cost of a JT_REF key. This may be true for the InnoDB primary key, but
  not for other unique keys or other engines. Now we use a handler
  function to calculate the cost, which allows us to handle
  clustered keys, covered keys and non-covered keys consistently.
- ha_start_keyread() didn't call extra_opt() if keyread was already
  enabled but still changed the 'keyread' variable (which is wrong).
  Fixed by not doing anything if keyread is already enabled.
- multi_range_read_info_cost() didn't take into account io_cost when
  calculating the cost of ranges.
- fix_semijoin_strategies_for_picked_join_order() used the wrong
  record_count when calling best_access_path() for SJ_OPT_FIRST_MATCH
  and SJ_OPT_LOOSE_SCAN.
- Hash joins didn't provide the correct best_cost to the upper level, which
  meant that the cost for hash joins was more expensive than calculated
  in best_access_path (a difference of 10x * TIME_FOR_COMPARE).
  This is fixed in the new code because we now include the
  TIME_FOR_COMPARE cost in 'read_time'.

Other things:
- Added some 'if (thd->trace_started())' to speed up code
- Removed the unused function Cost_estimate::is_zero()
- Simplified testing of HA_POS_ERROR in get_best_ror_intersect().
  (No cost changes)
- Moved ha_start_keyread() from join_read_const_table() to join_read_const()
  to enable keyread for all types of JT_CONST tables.
- Made a few very short functions inline in handler.h

Notes:
- In main.rowid_filter the join order of order and lineitem is swapped.
  This is because the cost of doing a range fetch of lineitem (98 rows) is
  almost as big as the whole join of order,lineitem. The filtering will
  also ensure that we only have to do very small key fetches of the rows
  in lineitem.
- main.index_merge_myisam had a few changes where we are now using
  fewer keys for index_merge. This is because index scans are now more
  expensive than before.
- handler->optimizer_cache_cost is updated in ha_external_lock().
  This ensures that it is up to date per statement.
  Not an optimal solution (for locked tables), but should be ok for now.
- 'DELETE FROM t1 WHERE t1.a > 0 ORDER BY t1.a' does not take the cost of
  filesort into consideration when a table scan is chosen.
  (main.myisam_explain_non_select_all)
- perfschema.table_aggregate_global_* has changed because an update
  on a table with 1 row will now use table scan instead of key lookup.

TODO in upcoming commits:
- Fix selectivity calculation for ranges with and without filtering and
  when there is a ref access but scan is chosen.
  For this we have to store the lowest known value for
  'accepted_records' in the OPT_RANGE structure.
- Change that records_read does not include filtered rows.
- test_if_cheaper_ordering() needs to be updated to properly calculate
  costs. This will fix tests like main.order_by_innodb,
  main.single_delete_update
- Extend get_range_limit_read_cost() to take cost_for_index_read()
  into consideration if there were no quick keys. This will reduce
  the computed cost for ORDER BY with LIMIT in some cases.
  (main.innodb_ext_key)
- Fix that we take into account selectivity when counting the number
  of rows we have to read when considering using an index table scan to
  resolve ORDER BY.
- Add new calculation for rnd_pos_time() where we take into account the
  benefit of reading multiple rows from the same page.
2023-02-02 21:43:30 +03:00
Monty
4062fc28bd Optimizer code cleanups, no logic changes
- Updated comments
- Added some extra DBUG
- Indentation changes and breaking of long lines
- Trivial code changes like:
  - Combining 2 statements into one
  - Reordering DBUG lines
  - Using a variable to store a pointer that is used multiple times
- Moved declarations of variables to the start of loops/functions
- Removed dead or commented code
- Removed wrong DBUG_EXECUTE code in best_extension_by_limited_search()
2023-01-30 15:22:21 +02:00
Sergei Petrunia
f0ea7f7f33 MDEV-28749: restore_prev_nj_state() doesn't update cur_sj_inner_tables correctly
(Try 2)

The code that updates semi-join optimization state for a join order prefix
had several bugs. The visible effect was bad optimization for FirstMatch or
LooseScan strategies: they either weren't considered when they should have
been, or considered when they shouldn't have been.

In order to hit the bug, the optimizer needs to consider several different
join prefixes in a certain order. Queries with "obvious" query plans which
prune all join orders except one are not affected.

Internally, the bugs in updates of semi-join state were:
1. restore_prev_sj_state() assumed that
  "we assume remaining_tables doesnt contain @tab"
  which wasn't true.
2. Another bug in this function: it did remove bits from
   join->cur_sj_inner_tables but never added them.
3. greedy_search() adds tables into the join prefix but neglects to update
   the semi-join optimization state. (It does update the nested outer join
   state, see this call:
     check_interleaving_with_nj(best_table)
   but there is no matching call to update the semi-join state.
   This wasn't visible because most of the state is in the POSITION
   structure, which is updated; but there is also state in the JOIN, too.)

The patch:
- Fixes all of the above
- Adds JOIN::dbug_verify_sj_inner_tables() which is used to verify the
  state is correct at every step.
- Renames advance_sj_state() to optimize_semi_joins().
  = Introduces update_sj_state(), which ideally should have been called
    "advance_sj_state", but I didn't reuse the name, to avoid confusion.
2022-06-07 20:43:10 +03:00
Marko Mäkelä
12414cd9f2 Merge 10.4 into 10.5 2019-09-27 19:12:07 +03:00
Marko Mäkelä
9b5cdeeb0f Merge 10.3 into 10.4 2019-09-27 16:26:53 +03:00
Marko Mäkelä
2911a9a693 Merge 10.2 into 10.3 2019-09-27 15:56:15 +03:00
Marko Mäkelä
ca9e0089d5 MDEV-19740: Fix GCC 9.2.1 -Wmaybe-uninitialized on AMD64
For CMAKE_BUILD_TYPE=Debug, the default MYSQL_MAINTAINER_MODE=AUTO
implies -Werror along with other flags in cmake/maintainer.cmake,
which would break the debug builds when CMAKE_CXX_FLAGS include -O2.

This fix includes a backport of 6dd3f24090
from MariaDB 10.3.
2019-09-27 10:43:23 +03:00
Varun Gupta
8e92d5e5e3 MDEV-20468: Allocating more space than required for JOIN_TAB array for a query with SJM table 2019-09-24 21:10:25 +05:30
Marko Mäkelä
5a92ccbaea Merge 10.3 into 10.4
Disable MDEV-20576 assertions until MDEV-20595 has been fixed.
2019-09-23 17:35:29 +03:00
Marko Mäkelä
c016ea660e Merge 10.2 into 10.3 2019-09-23 10:25:34 +03:00
Sergei Petrunia
c8dc866fde MDEV-20371: Invalid reads at plan refinement stage: join->positions...
best_access_path() is called from two optimization phases:

1. Plan choice phase, in choose_plan(). Here, the join prefix being
   considered is in join->positions[]

2. Plan refinement stage, in fix_semijoin_strategies_for_picked_join_order
   Here, the join prefix is in join->best_positions[]

It used to access join->positions[] from stage #2. This didn't cause any
valgrind or asan failures (as join->positions[] had been written to before),
but the effect was similar to that of reading random data:
the join prefix we've picked (in join->best_positions) could have
nothing in common with the join prefix that was last to be considered
(in join->positions).
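
A reduced sketch of the two arrays involved (declarations are simplified
stand-ins): the fix is that stage #2 must read join->best_positions[], the
chosen plan, not join->positions[], the last prefix that happened to be
considered.

  struct POSITION_demo { double records_read; };

  struct JOIN_demo
  {
    POSITION_demo *positions;       /* prefix being considered (stage #1) */
    POSITION_demo *best_positions;  /* the chosen plan (stage #2)         */
  };

  /* plan refinement must look at the chosen plan: */
  static double rows_at(const JOIN_demo *join, unsigned idx)
  { return join->best_positions[idx].records_read; }

  int main()
  {
    POSITION_demo best[1]= { { 42.0 } };
    JOIN_demo join= { 0, best };
    return rows_at(&join, 0) == 42.0 ? 0 : 1;
  }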
2019-09-11 17:06:50 +03:00
Marko Mäkelä
efb8485d85 Merge 10.3 into 10.4, except for MDEV-20265
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
2019-08-23 08:06:17 +03:00
Aleksey Midenkov
6dd3f24090 MDEV-19740 Debug build of 10.3.15 FTBFS
* Replace LINT_INIT for non-struct types with ctor initializers;
* Check that the BUILD_DEPS list is not empty so REMOVE_DUPLICATES won't
  throw an error.
2019-08-19 10:38:24 +03:00
Oleksandr Byelkin
c07325f932 Merge branch '10.3' into 10.4 2019-05-19 20:55:37 +02:00
Marko Mäkelä
be85d3e61b Merge 10.2 into 10.3 2019-05-14 17:18:46 +03:00
Vicențiu Ciorbaru
f177f125d4 Merge branch '5.5' into 10.1 2019-05-11 19:15:57 +03:00
Michal Schorm
17b4f99928 Update FSF address
This commit is based on the work of Michal Schorm, rebased on the
earliest MariaDB version.

The command line used to generate this diff was:

find ./ -type f \
  -exec sed -i -e 's/Foundation, Inc., 59 Temple Place, Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
  -exec sed -i -e 's/Foundation, Inc. 59 Temple Place.* Suite 330, Boston, /Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, /g' {} \; \
  -exec sed -i -e 's/MA.*.....-1307.*USA/MA 02110-1335 USA/g' {} \; \
  -exec sed -i -e 's/Foundation, Inc., 59 Temple/Foundation, Inc., 51 Franklin/g' {} \; \
  -exec sed -i -e 's/Place, Suite 330, Boston, MA.*02111-1307.*USA/Street, Fifth Floor, Boston, MA 02110-1335 USA/g' {} \; \
  -exec sed -i -e 's/MA.*.....-1307/MA 02110-1335/g' {} \;
2019-05-10 20:52:00 +03:00
Galina Shalygina
7a77b221f1 MDEV-7486: Condition pushdown from HAVING into WHERE
A condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on fields that are used in the GROUP BY list
or on fields that are equal to grouping fields.
Aggregate functions can't be pushed down.

How the pushdown is performed, by example:

SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);

=>

SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);

The implementation scheme:

1. Extract the most restrictive condition cond from the HAVING clause of
   the select that depends only on the fields that are used in the GROUP BY
   list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
   of the select
3. Remove cond from the HAVING clause if it is possible

The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().

New test file having_cond_pushdown.test is created.
2019-02-17 23:38:44 -08:00
Igor Babaev
658128af43 MDEV-16188 Use in-memory PK filters built from range index scans
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases usage of such filters reduces
the number of disk seeks spent for fetching table rows.

In this implementation, the choice of which filter (if any) to apply
is made purely on cost-based considerations.

This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in the commit
8d5a11122c.

Besides this, the patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections to the
implementation of the handler function records_in_range() for MyISAM.

This patch supports the feature for InnoDB and MyISAM.
2019-02-03 14:56:12 -08:00
Galina Shalygina
d3ff133390 MDEV-12387 Push conditions into materialized subqueries
The logic and the implementation scheme are similar to those of
MDEV-9197 Pushdown conditions into non-mergeable views/derived tables.

How the pushdown is performed, by example:

select * from t1
where a>3 and b>10 and
 (a,b) in (select x,max(y) from t2 group by x);

-->

select * from t1
where a>3 and b>10 and
  (a,b) in (select x,max(y)
            from t2
            where x>3
            group by x
            having max(y)>10);

The implementation scheme:

1. Search for the condition cond that depends only on the fields
   from the left part of the IN subquery (left_part)
2. Find fields F_group in the select of the right part of the
   IN subquery (right_part) that are used in the GROUP BY
3. Extract from the cond condition cond_where that depends only on the
   fields from the left_part that stay at the same places in the left_part
   (have the same indexes) as the F_group fields in the projection of the
   right_part
4. Transform cond_where so it can be pushed into the WHERE clause of the
   right_part and delete cond_where from the cond
5. Transform cond so it can be pushed into the HAVING clause of the right_part

The optimization is made in
Item_in_subselect::pushdown_cond_for_in_subquery() and is controlled by the
variable condition_pushdown_for_subquery.

New test file in_subq_cond_pushdown.test is created.

There are also some changes made for setup_jtbm_semi_joins().
Now it is decomposed into 2 procedures: setup_degenerate_jtbm_semi_joins(),
which is called before optimize_cond() for cond, and setup_jtbm_semi_joins(),
which is called after optimize_cond().
The new setup_jtbm_semi_joins() is written so that the result of its work is
the same as if it were called before optimize_cond().

The code that is common for pushdown into materialized derived tables and
into materialized IN subqueries is factored out into pushdown_cond_for_derived(),
Item_in_subselect::pushdown_cond_for_in_subquery() and
st_select_lex::pushdown_cond_into_where_clause().
2018-05-15 23:45:59 +02:00
Michael Widenius
cc77f9882d Changed KEY names to use LEX_CSTRING 2017-08-24 01:05:53 +02:00
iangilfillan
f0ec34002a Correct FSF address 2017-03-10 18:21:29 +01:00
Sergei Golubchik
530a6e7481 Merge branch '10.0' into 10.1
referenced_by_foreign_key2(), needed for InnoDB to compile,
was taken from 10.0-galera
2015-09-03 12:58:41 +02:00
Monty
3bca8db4f9 MDEV-6152: Remove calls to current_thd while creating Item
- Part 4: Removing calls to sql_alloc() and sql_calloc()

Other things:
- Added current_thd in some places to make it clear that it's called (easier to remove later)
- Move memory allocation from Item_func_case::fix_length_and_dec() to Item_func_case::fix_fields()
- Added mem_root to some new calls
- Fixed some wrong UNINIT_VAR() calls
- Fixed a bug in generate_partition_syntax() in case of errors
- Added mem_root as argument to new thread_info
- Simplified my_parse_error() call in sql_yacc.yy
2015-08-27 22:29:11 +03:00
Sergei Petrunia
9b475ee3c1 MDEV-8289: Semijoin inflates number of rows in query result
- Make the semi-join optimizer not choose LooseScan
  when 1) the index is not covering and 2) a full index
  scan would be required.

- Make sure that the code in make_join_select() that may change
  full index scan into a range scan is not invoked when the table
  uses full scan.
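
A sketch of the applicability test described above, reduced to the two
conditions (names are illustrative, not from the patch):

  /* LooseScan is rejected when the index is not covering AND a full index
     scan would be required: every scanned key would then also need a row
     fetch, inflating the row count in the result. */
  static bool loosescan_applicable(bool index_is_covering,
                                   bool need_full_index_scan)
  {
    return index_is_covering || !need_full_index_scan;
  }

  int main()
  { return loosescan_applicable(false, true) ? 1 : 0; }  /* rejected */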
2015-08-18 22:54:42 +03:00
Sergei Golubchik
49c853fb94 Merge branch '5.5' into 10.0 2015-05-04 22:00:24 +02:00
Sergei Petrunia
c020d362b6 MDEV-7474: Semi-Join's DuplicateWeedout strategy skipped ...
JOIN::cur_dups_producing_tables was not maintained correctly in
the cases of greedy optimization (search_depth < n_tables).

Moved it to POSITION structure where it will be maintained automatically.

Removed POSITION::prefix_dups_producing_tables since its value can now
be calculated.
2015-03-17 13:26:33 +03:00
Sergei Golubchik
0dc23679c8 10.0-base merge 2014-02-26 15:28:07 +01:00
Sergei Golubchik
0b9a0a3517 5.5 merge 2014-02-25 16:04:35 +01:00
Sergey Vojtovich
d12c7adf71 MDEV-5314 - Compiling fails on OSX using clang
This is port of fix for MySQL BUG#17647863.

revno: 5572
revision-id: jon.hauglid@oracle.com-20131030232243-b0pw98oy72uka2sj
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
timestamp: Thu 2013-10-31 00:22:43 +0100
message:
  Bug#17647863: MYSQL DOES NOT COMPILE ON OSX 10.9 GM

  Rename test() macro to MY_TEST() to avoid conflict with libc++.
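
The rename, sketched (the one-line body below matches the common historical
definition, but treat it as an illustration):

  /* Before: a global macro named test() collided with identifiers used by
     libc++ headers on OSX 10.9.  After the rename there is no clash. */
  #define MY_TEST(a) ((a) ? 1 : 0)  /* was: #define test(a) ((a) ? 1 : 0) */

  int main()
  { return MY_TEST(0); }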
2014-02-19 14:05:15 +04:00
Igor Babaev
f17079fa7e Merge 5.3->5.5 2014-02-10 17:00:51 -08:00
Sergey Petrunya
34b6f51dab MDEV-5582: Plugin 'MEMORY' has ref_count=1 after shutdown with materialization+semijoin
- Let cleanup_empty_jtbm_semi_joins() walk into semi-join nests.
2014-02-07 20:51:31 +04:00
unknown
9d83468e78 merge 5.5 -> 10.0-base 2013-09-25 21:07:06 +03:00
unknown
e5746665c9 merge 10.0-base -> 10.0 2013-09-26 21:20:15 +03:00
Sergei Golubchik
9af177042e 10.0-base merge.
Partitioning/InnoDB changes are *not* merged (they'll come from 5.6)
TokuDB does not compile (not updated to 10.0 SE API)
2013-09-21 10:14:42 +02:00
Sergei Golubchik
4ec2e9d7ed 5.5 merge and fixes for compiler/test errors 2013-09-18 13:07:31 +02:00
Sergey Petrunya
422c55a240 MDEV-5037: Server crash on a JOIN on a derived table with join_cache_level > 2
- The crash happened because the optimizer called handler->multi_range_read_info()
  on a derived temporary table.  That table had been created, but not opened yet.
  Because of that, handler::table was NULL, which caused the crash.
  Fixed by changing DS-MRR methods to use handler::table_share instead.
  handler::table_share is set in the handler ctor, so this should be safe.
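
A reduced sketch of the fix pattern (stand-in types; the real change is in
the DS-MRR cost functions):

  struct TABLE_SHARE_demo { unsigned long reclength; };
  struct TABLE_demo { TABLE_SHARE_demo *s; };

  struct handler_demo
  {
    TABLE_demo *table;              /* NULL until the table is opened     */
    TABLE_SHARE_demo *table_share;  /* set in the handler ctor: always OK */
  };

  /* before: h->table->s->reclength crashed for a created-but-not-yet-
     opened derived temp table, because h->table was still NULL */
  static unsigned long row_length(const handler_demo *h)
  {
    return h->table_share->reclength;  /* the fix: go via table_share */
  }

  int main()
  {
    TABLE_SHARE_demo share= { 100 };
    handler_demo h= { 0, &share };     /* table not opened yet */
    return row_length(&h) == 100 ? 0 : 1;
  }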
2013-09-20 14:47:38 +04:00
Sergey Petrunya
33f807fd91 Merge 5.3 -> 5.5 2013-09-12 13:54:46 +04:00
Sergey Petrunya
7e4845beea MDEV-5011: ERROR Plugin 'MEMORY' has ref_count=1 after shutdown for SJM queries
- Provide a special execution path for cleanup of degenerate 
  non-merged semi-join children of degenerate selects.
2013-09-12 13:53:13 +04:00
Igor Babaev
a1cd28e2e5 Merge 10.0-base -> 10.0 2013-04-17 10:18:04 -07:00
Igor Babaev
fc1c8ffdad The pilot patch for mwl#253. 2013-03-11 07:44:24 -07:00
Sergei Golubchik
474fe6d9d9 fixes for test failures
and small collateral changes

mysql-test/lib/My/Test.pm:
  somehow with "print" we get truncated writes sometimes
mysql-test/suite/perfschema/r/digest_table_full.result:
  md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/dml_handler.result:
  host table is not ported over yet
mysql-test/suite/perfschema/r/information_schema.result:
  host table is not ported over yet
mysql-test/suite/perfschema/r/nesting.result:
  this differs, because we don't rewrite general log queries, and multi-statement
  packets are logged as one entry. this result file is identical to what mysql-5.6.5
  produces with the --log-raw option.
mysql-test/suite/perfschema/r/relaylog.result:
  MariaDB modifies the binlog index file directly, while MySQL 5.6 has a feature "crash-safe binlog index" and modifies a special "crash-safe" shadow copy of the index file and then moves it over. That's why this test shows "NONE" index file writes in MySQL and "MANY" in MariaDB.
mysql-test/suite/perfschema/r/server_init.result:
  MariaDB initializes the "manager" resources from the "manager" thread, and starts this thread only when --flush-time is not 0. MySQL 5.6 initializes "manager" resources unconditionally on server startup.
mysql-test/suite/perfschema/r/stage_mdl_global.result:
  this differs, because MariaDB disables query cache when query_cache_size=0. MySQL does not
  do that, and this causes useless mutex locks and waits.
mysql-test/suite/perfschema/r/statement_digest.result:
  md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_consumers.result:
  md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_long_query.result:
  md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/rpl/r/rpl_mixed_drop_create_temp_table.result:
  will be updated to match 5.6 when alfranio.correia@oracle.com-20110512172919-c1b5kmum4h52g0ni and anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y are merged
mysql-test/suite/rpl/r/rpl_non_direct_mixed_mixing_engines.result:
  will be updated to match 5.6 when anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y is merged
2012-09-27 20:09:46 +02:00
Sergei Golubchik
44cf9ee5f7 5.3 merge 2012-05-04 07:16:38 +02:00