Concurrency Exchange

Understanding SQL Server locking, blocking, and deadlocking

Built-in metadata functions and blocking

August 12, 2018 by jmanske

I have seen code in the wild that uses built-in metadata functions in SQL Server, like OBJECT_ID, to determine if an object exists, or OBJECT_NAME, to get the name of an object. I’m talking about code patterns like:

IF EXISTS( SELECT 1 FROM sys.objects WHERE OBJECT_ID = OBJECT_ID( 'dbo.TableName' ) )
BEGIN
  [...]
END

Depending on what else your application is doing, this can cause a lot of unnecessary blocking. I have seen applications that do DDL on the fly, doing things like:

  • Adding columns to existing tables
  • Creating “temporary” tables that are not true SQL Server temporary tables
  • Rebuilding or reorganizing indexes during an ETL process

These kinds of actions require exclusive locks on underlying SQL Server metadata objects. We won’t ever have total control over how SQL Server interacts with these objects, so it is my opinion that we should reduce blocking on these objects whenever possible.
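
A quick way to see this kind of blocking for yourself (the dbo.OnTheFly table is just an example) is to leave some DDL open in one session and then call a built-in metadata function from another:

-- session 1: do some DDL on the fly and leave the transaction open
BEGIN TRANSACTION;
CREATE TABLE dbo.OnTheFly ( Id int );
-- (no COMMIT yet)

-- session 2: under the default READ COMMITTED isolation level, this call
-- will typically sit and wait until session 1 commits or rolls back
SELECT OBJECT_ID( N'dbo.OnTheFly' );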

    This is one of the few times where I think that using the READ UNCOMMITTED isolation level can be safe and effective. I don’t claim to be treading new ground here. This has been blogged about before, maybe most succinctly by Aaron Bertrand in this blog post.

    My suggestion would be to write your own APIs that enforce the READ UNCOMMITTED isolation level. A first attempt at doing this might look like this:

    CREATE OR ALTER FUNCTION dbo.ObjectId
    ( @SchemaName sysname,
      @ObjectName sysname )
    RETURNS int
    AS
    BEGIN
      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    
      DECLARE @ObjectId int;
    
      SELECT @ObjectId = Objs.object_id
        FROM sys.objects Objs
          INNER JOIN sys.schemas Schs
            ON Objs.schema_id = Schs.schema_id
        WHERE Schs.name = @SchemaName
          AND Objs.name = @ObjectName;
    
      RETURN @ObjectId;
    END;
    GO

    But we get this compile-time error:

    Msg 443, Level 16, State 15, Procedure ObjectId, Line 8 [Batch Start Line 89]
    Invalid use of a side-effecting operator 'SET TRANSACTION ISOLATION LEVEL' within a function.

    If you want to use a function for this purpose, you will need to use WITH( NOLOCK ) on every catalog view in your query (a sketch of that variant follows the procedure example below). You could also elect to use a procedure with an OUTPUT variable if that seems cleaner to you; it might look something like this:

    CREATE OR ALTER PROCEDURE dbo.ObjectId
    (
      @SchemaName sysname,
      @ObjectName sysname,
      @ObjectId int OUTPUT )
    AS
    BEGIN
      SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    
      SELECT @ObjectId = Objs.object_id
        FROM sys.objects Objs
          INNER JOIN sys.schemas Schs
            ON Objs.schema_id = Schs.schema_id
        WHERE Schs.name = @SchemaName
          AND Objs.name = @ObjectName;
    END;
    GO
    
    -- call the SP
    DECLARE @ObjectId int;
    EXECUTE dbo.ObjectId @SchemaName = N'dbo',
                         @ObjectName = N'ExampleObject',
                         @ObjectId = @ObjectId OUTPUT;
    SELECT @ObjectId;
    GO
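
    And if you would rather stick with a function, a minimal sketch of the NOLOCK-hinted variant might look like this (the dbo.ObjectIdNoLock name is mine, and it is the same query as above with table hints in place of SET TRANSACTION ISOLATION LEVEL):

    CREATE OR ALTER FUNCTION dbo.ObjectIdNoLock
    ( @SchemaName sysname,
      @ObjectName sysname )
    RETURNS int
    AS
    BEGIN
      DECLARE @ObjectId int;

      -- table hints are allowed inside a function, unlike SET TRANSACTION ISOLATION LEVEL
      SELECT @ObjectId = Objs.object_id
        FROM sys.objects Objs WITH ( NOLOCK )
          INNER JOIN sys.schemas Schs WITH ( NOLOCK )
            ON Objs.schema_id = Schs.schema_id
        WHERE Schs.name = @SchemaName
          AND Objs.name = @ObjectName;

      RETURN @ObjectId;
    END;
    GO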

    By writing your own APIs, you can also control what return values you get if the object doesn’t exist. You can return -1 instead of NULL to make consuming code easier to write, for example.
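
    For example (a small sketch building on the procedure above), the “object does not exist” case could be mapped to -1 before the value is handed back:

    -- at the end of dbo.ObjectId, after the SELECT:
    -- if no row matched, @ObjectId is still NULL, so return -1 instead
    SET @ObjectId = ISNULL( @ObjectId, -1 );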

    Why even bother?

    Aren’t these metadata queries typically very fast? Who cares about blocking in these cases?

    If you are writing targeted SQL Server metadata queries (as in, limited to a single object), I think it is best to

  • first write a non-blocking query to get the object_id,
  • then write a second query that targets the catalog view that contains the information that you need (a sketch of this two-step pattern follows below).

    Sometimes larger metadata queries that access a few different catalog views are victimized by optimizer timeouts when a plan is being generated, because the underlying structure of the views might be a lot more complicated than your query is letting on. What you end up with is a suboptimal plan that might do a scan in production where it was doing a seek in development. This leads to unexpected blocking. Blocking can lead to deadlocks. Deadlocks can be show-stopping problems depending on where they occur in the application.
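
    As a rough sketch, that two-step pattern might look something like this (the dbo.ExampleObject name and the sys.columns query are just placeholders for whatever you actually need):

    DECLARE @ObjectId int;

    -- step 1: a non-blocking lookup of just the object_id
    SELECT @ObjectId = Objs.object_id
      FROM sys.objects Objs WITH ( NOLOCK )
        INNER JOIN sys.schemas Schs WITH ( NOLOCK )
          ON Objs.schema_id = Schs.schema_id
      WHERE Schs.name = N'dbo'
        AND Objs.name = N'ExampleObject';

    -- step 2: a targeted query against the catalog view that has the information you need
    SELECT Cols.name,
           Cols.column_id
      FROM sys.columns Cols WITH ( NOLOCK )
      WHERE Cols.object_id = @ObjectId;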

    It can be quite an undertaking to write a complete family of APIs to get every piece of metadata about every kind of object you might want. I think it makes more sense to follow good rules like the above than to try to tackle that larger project. This will allow you to query SQL Server metadata in the safest possible way.
