• Foreword: Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance


    For the details, please visit the following address: www.sqlworkshops.com.


    When you create a temporary table you expect a new table with no past history (no statistics carried over from a previous existence), but this is not true if you make fewer than 6 updates to the temporary table. This can lead to poor performance of queries that are sensitive to the contents of the temporary table.
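
    To see which statistics the optimizer is working with, you can inspect the temporary table's statistics from the session that owns it. The following is only a diagnostic sketch, not part of the original script: it assumes a temporary table named #temp1 (as in the script below), looks up the auto-generated statistics name for the primary key, and calls DBCC SHOW_STATISTICS through dynamic SQL; the statistics header shows when the statistics were last updated.

    --diagnostic sketch (not from the original script): run in the session that
    --owns #temp1, for example temporarily inside the stored procedure
    declare @stats sysname, @cmd nvarchar(400)
    select @stats = name
    from tempdb.sys.stats
    where object_id = object_id('tempdb..#temp1') and stats_id = 1
    set @cmd = N'dbcc show_statistics (''tempdb..#temp1'', [' + @stats + N'])'
    exec (@cmd)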

    I was optimizing SQL Server performance for a customer who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of a search depended on who had searched for what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature introduced in SQL Server 2005 and also present in SQL Server 2008 and SQL Server 2012. For this customer we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not created afresh; they are cached from the previous use and only updated once the automatic update statistics threshold is reached. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.
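
    One way to observe the caching (again only a diagnostic sketch, not part of the original script): after the stored procedure completes, the temporary table is not dropped but kept in tempdb under a system-generated hexadecimal name, and its create_date still reflects the execution that first created it.

    --diagnostic sketch (not from the original script): list temporary tables
    --currently kept in the tempdb cache; cached tables appear under a
    --hexadecimal name such as #A1B2C3D4
    select name, object_id, create_date
    from tempdb.sys.objects
    where type = 'U' and name like '#%'
    order by create_date desc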

    We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement against the temporary table inside the stored procedure. One possibility is an alter table statement that adds a constraint, but a named constraint can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way is to create an index. Both workarounds are demonstrated in the script below.

    I suspect many customers are in this situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance.

    The ideal solution would be a more aggressive statistics update when a cached temporary table holds only a few rows. I will open a Connect item to report this issue.

    Meanwhile, you can mitigate the issue by creating an index on the temporary table. You can monitor the number of active temporary tables using the Windows Performance Monitor counter SQL Server: General Statistics -> Active Temp Tables.
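
    If you prefer T-SQL to Performance Monitor, the same counter is exposed through sys.dm_os_performance_counters; the sketch below simply filters on the counter name, so it works for default and named instances alike.

    --the Active Temp Tables counter read from T-SQL instead of Performance Monitor
    select [object_name], counter_name, cntr_value
    from sys.dm_os_performance_counters
    where counter_name = 'Active Temp Tables'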

    The script to reproduce the issue and the workarounds is listed below:
    set nocount on
    set statistics time off
    set statistics io off
    --set up a permanent table with 50,001 rows, all with c2 = 1, so in the
    --procedures below only a parameter value of 1 matches any rows
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    --baseline procedure: no recompile hint and no DDL on the temporary table,
    --so the cached (stale) statistics are reused across executions
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go
