Nov 26, 2007

Conditional T-SQL sorting using a CASE expression

We can do conditional sorting in two ways:
1. using dynamic SQL
2. using CASE expressions

1. Using dynamic SQL
DECLARE @sql nvarchar(400), @input int, @paramdef nvarchar(100)
SELECT @input = 2, @paramdef = N'@input int'

-- Branch inside the dynamic batch: 1 = sort ascending, 2 = sort descending
SET @sql = N'if @input = 1 select * from acctlist order by email ASC;
             if @input = 2 select * from acctlist order by email DESC'

-- sp_executesql passes @input as a true parameter, which allows plan reuse
EXEC sp_executesql @sql, @paramdef, @input

2. Using CASE expressions (assuming @col and @order hold the requested sort column and direction):
select * from authors
order by
CASE WHEN @col = 'firstname' AND @order = 'ASC' THEN firstname END ASC,
CASE WHEN @col = 'firstname' AND @order = 'DESC' THEN firstname END DESC,
CASE WHEN @col = 'email' AND @order = 'ASC' THEN email END ASC,
CASE WHEN @col = 'email' AND @order = 'DESC' THEN email END DESC,
----Default sort by lastname
CASE WHEN @order = 'ASC' THEN lastname END ASC,
CASE WHEN @order = 'DESC' THEN lastname END DESC
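
As a runnable sketch, the CASE-based ORDER BY can be wrapped in a stored procedure; the procedure name here is hypothetical, and the authors table is assumed to have firstname, lastname, and email columns:

-- Hypothetical wrapper procedure; table and column names are assumptions
CREATE PROCEDURE dbo.GetAuthorsSorted
    @col   varchar(20) = 'lastname',  -- column to sort by
    @order varchar(4)  = 'ASC'        -- 'ASC' or 'DESC'
AS
BEGIN
    SELECT firstname, lastname, email
    FROM authors
    ORDER BY
        CASE WHEN @col = 'firstname' AND @order = 'ASC'  THEN firstname END ASC,
        CASE WHEN @col = 'firstname' AND @order = 'DESC' THEN firstname END DESC,
        CASE WHEN @col = 'email' AND @order = 'ASC'  THEN email END ASC,
        CASE WHEN @col = 'email' AND @order = 'DESC' THEN email END DESC,
        -- default sort by lastname
        CASE WHEN @order = 'ASC'  THEN lastname END ASC,
        CASE WHEN @order = 'DESC' THEN lastname END DESC
END
GO

-- Usage:
EXEC dbo.GetAuthorsSorted @col = 'email', @order = 'DESC'

Keeping each column in its own CASE expression, rather than combining columns into one, also avoids implicit type-conversion errors when the sort columns have different data types.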


Nov 25, 2007

Creating ServerTraces using SQL Profiler for background profiling

Follow these steps to create a server trace for either SQL 2000 or SQL 2005.
Server traces run in the background, so SQL Server does not have to stream profiling results back to the SQL 2005 Profiler GUI; it can log to a file on the server instead.

Note that you can create server-trace script files for either SQL 2000 or SQL 2005 using the SQL 2005 Profiler tool:
  • First use Profiler to define the events, columns, and filters needed. Useful events are SQL:BatchCompleted, RPC:Completed, and SP:StmtCompleted. Important columns are Duration, CPU, Reads, and Writes. More advanced events such as SP:Recompile and Scan:Started help you check for recompiles and for table and index scans.
  • Click the Run button, then immediately stop the trace.
  • Click the File menu, expand the Export option, and then expand the Script Trace Definition option. Choose For SQL Server 2005 (or For SQL Server 2000 if scripting for an older server) and select a filename for the script.
  • Once the script has been saved, open it for editing in SQL Server Management Studio.
    The following line of the script must be edited so that a valid path and filename are specified (the .trc extension is appended automatically):
    exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere',
    @maxfilesize, NULL
    The @maxfilesize parameter defaults to 5 MB.
  • Run the script. It will also select back the @TraceID value; make a note of it for later.
  • Once you are done, use that trace ID to stop and then close the trace (assuming the trace ID returned was 99):

    EXEC sp_trace_setstatus @traceid = 99, @status = 0  -- stop the trace
    EXEC sp_trace_setstatus @traceid = 99, @status = 2  -- close the trace and delete its definition from the server
  • The fn_trace_gettable function can be used to read the data back from the trace file:

    SELECT * FROM ::fn_trace_gettable('C:\Traces\myTrace.trc', 999)

    where 999 is the number of rollover trace files to read.
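
For reference, here is a minimal hand-written version of such a script; the file path is an assumption, and the event and column IDs are the documented values for RPC:Completed (10), SQL:BatchCompleted (12), TextData (1), Duration (13), Reads (16), Writes (17), and CPU (18):

-- Minimal server-side trace sketch; C:\Traces\myTrace is an assumed path
DECLARE @rc int, @TraceID int, @maxfilesize bigint, @on bit
SELECT @maxfilesize = 5, @on = 1  -- 5 MB cap, flag to turn columns on

-- Create the trace (the .trc extension is appended automatically)
EXEC @rc = sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\myTrace', @maxfilesize, NULL
IF @rc <> 0 RETURN

-- Capture TextData, Duration, Reads, Writes, and CPU for both events
EXEC sp_trace_setevent @TraceID, 10, 1,  @on
EXEC sp_trace_setevent @TraceID, 10, 13, @on
EXEC sp_trace_setevent @TraceID, 10, 16, @on
EXEC sp_trace_setevent @TraceID, 10, 17, @on
EXEC sp_trace_setevent @TraceID, 10, 18, @on
EXEC sp_trace_setevent @TraceID, 12, 1,  @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 16, @on
EXEC sp_trace_setevent @TraceID, 12, 17, @on
EXEC sp_trace_setevent @TraceID, 12, 18, @on

-- Start the trace and report its ID so it can be stopped later
EXEC sp_trace_setstatus @TraceID, 1
SELECT @TraceID AS TraceID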


Nov 11, 2007

Limitations of table variables

Refer to this link: Table Variables

DECLARE @people TABLE
(
id INT,
name VARCHAR(32)
)

A table variable is created in memory, and so performs slightly better than #temp tables (also because there is even less locking and logging in a table variable). A table variable might still perform I/O to tempdb (which is where the performance issues of #temp tables make themselves apparent), though the documentation is not very explicit about this.
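
One unofficial way to see the tempdb connection is to look for the system-generated object a table variable leaves behind; this relies on observed behavior rather than anything documented, so treat it as a sketch:

-- Declare a table variable, then look at the newest user object in tempdb;
-- a system-generated name (something like '#0DAF0CB0') typically shows up.
DECLARE @t TABLE (i INT)

SELECT TOP 1 name, crdate
FROM tempdb..sysobjects
WHERE type = 'U'
ORDER BY crdate DESC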

Table variables are automatically cleared when the procedure or function goes out of scope, so you don't have to remember to drop or clear the data (which can be a good thing or a bad thing; remember "release early"?). The tempdb transaction log is less impacted than with #temp tables; table variable log activity is truncated immediately, while #temp table log activity persists until the log hits a checkpoint, is manually truncated, or when the server restarts.

Table variables are the only way you can use DML statements (INSERT, UPDATE, DELETE) on temporary data within a user-defined function. You can create a table variable within a UDF, and modify the data using one of the above statements. For example, you could do this:

CREATE FUNCTION dbo.example1
(
)
RETURNS INT
AS
BEGIN
DECLARE @t1 TABLE (i INT)
INSERT @t1 VALUES(1)
INSERT @t1 VALUES(2)
UPDATE @t1 SET i = i + 5
DELETE @t1 WHERE i < 7

DECLARE @max INT
SELECT @max = MAX(i) FROM @t1
RETURN @max
END
GO
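
Calling the function shows the net effect: 1 and 2 become 6 and 7, the 6 is deleted, and the largest remaining value comes back:

SELECT dbo.example1()  -- returns 7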

However, try that with a #temp table:

CREATE FUNCTION dbo.example2
(
)
RETURNS INT
AS
BEGIN
CREATE TABLE #t1 (i INT)
INSERT #t1 VALUES(1)
INSERT #t1 VALUES(2)
UPDATE #t1 SET i = i + 5
DELETE #t1 WHERE i < 7

DECLARE @max INT
SELECT @max = MAX(i) FROM #t1
RETURN @max
END
GO

Results:

Server: Msg 2772, Level 16, State 1, Procedure example2, Line 7
Cannot access temporary tables from within a function.

Or try accessing a permanent table:

CREATE TABLE table1
(
id INT IDENTITY,
name VARCHAR(32)
)
GO

CREATE FUNCTION dbo.example3
(
)
RETURNS INT
AS
BEGIN
INSERT table1(name) VALUES('aaron')
RETURN SCOPE_IDENTITY()
END
GO

Results:

Server: Msg 443, Level 16, State 2, Procedure example3, Line 8
Invalid use of 'INSERT' within a function.

Table variables can lead to fewer stored procedure recompilations than temporary tables (see KB #243586 and KB #305977) and, because their data is not rolled back with the surrounding transaction, they generate very little transaction log activity.

So, why not use table variables all the time? Well, when something sounds too good to be true, it probably is. Let's visit some of the limitations of table variables (part of this list was derived from KB #305977):

* Table variables are only available in SQL Server 2000 and later, with the database compatibility level set to 80 or higher.
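
You can check or change a database's compatibility level with sp_dbcmptlevel (the pubs database name is just an example):

EXEC sp_dbcmptlevel 'pubs'      -- one argument: reports the current level
EXEC sp_dbcmptlevel 'pubs', 80  -- two arguments: sets the level to 80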

* You cannot use a table variable in either of the following situations:

INSERT @table EXEC sp_someProcedure

SELECT * INTO @table FROM someTable

* You cannot truncate a table variable.

* Table variables cannot be altered after they have been declared.
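
Both restrictions are enforced at parse time; each of the following lines fails on its own with a syntax error:

DECLARE @t TABLE (i INT)

TRUNCATE TABLE @t          -- a table variable is not allowed here
ALTER TABLE @t ADD j INT   -- table variables cannot be altered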

* You cannot explicitly add an index to a table variable, however you can create a system index through a PRIMARY KEY CONSTRAINT, and you can add as many indexes via UNIQUE CONSTRAINTs as you like. What the optimizer does with them is another story. One thing to note is that you cannot explicitly name your constraints, e.g.:

DECLARE @myTable TABLE
(
CPK1 int,
CPK2 int,
CONSTRAINT myPK PRIMARY KEY (CPK1, CPK2)
)

-- yields:
Server: Msg 156, Level 15, State 1, Line 6
Incorrect syntax near the keyword 'CONSTRAINT'.

-- yet the following works:
DECLARE @myTable TABLE
(
CPK1 int,
CPK2 int,
PRIMARY KEY (CPK1, CPK2)
)

* Within a table variable's definition, you cannot use a user-defined function (UDF) in a CHECK CONSTRAINT, computed column, or DEFAULT CONSTRAINT.

* You cannot use a user-defined type (UDT) in a table variable's column definition.

* Unlike a #temp table, you cannot drop a table variable when it is no longer necessary—you just need to let it go out of scope.

* You cannot generate a table variable's column list dynamically, e.g. you can't do this:

SELECT * INTO @tableVariable

-- yields:

Server: Msg 170, Level 15, State 1, Line 1
Line 1: Incorrect syntax near '@tableVariable'.

You also can't build a table variable inside dynamic SQL and expect to use it outside that scope, e.g.:

DECLARE @colList VARCHAR(8000), @sql VARCHAR(8000)
SET @colList = 'a INT,b INT,c INT'
SET @sql = 'DECLARE @foo TABLE('+@colList+')'
EXEC(@sql)
INSERT @foo SELECT 1,2,3

-- this last line fails:

Server: Msg 137, Level 15, State 2, Line 5
Must declare the variable '@foo'.

This is because the rest of the script knows nothing about the temporary objects created within the dynamic SQL. Like other local variables, table variables declared inside of a dynamic SQL block (EXEC or sp_executeSQL) cannot be referenced from outside, and vice-versa. So you would have to write the whole set of statements to create and operate on the table variable, and perform it with a single call to EXEC or sp_executeSQL.
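
A minimal sketch of that workaround, with the declaration, the insert, and the select all performed inside one dynamic batch:

DECLARE @colList VARCHAR(8000), @sql VARCHAR(8000)
SET @colList = 'a INT,b INT,c INT'

-- Everything that touches @foo happens inside a single EXEC
SET @sql = 'DECLARE @foo TABLE(' + @colList + ') '
         + 'INSERT @foo SELECT 1,2,3 '
         + 'SELECT * FROM @foo'
EXEC(@sql)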

* The system will not generate automatic statistics on table variables. Likewise, you cannot manually create statistics (statistics are used to help the optimizer pick the best possible query plan).

* An INSERT into a table variable will not take advantage of parallelism.

* A table variable will always have a cardinality of 1, because the table doesn't exist at compile time.

* Table variables must be referenced by an alias, except in the FROM clause. Consider the following two scripts:

CREATE TABLE #foo(id INT)
DECLARE @foo TABLE(id INT)
INSERT #foo VALUES(1)
INSERT #foo VALUES(2)
INSERT #foo VALUES(3)
INSERT @foo SELECT * FROM #foo

SELECT id
FROM @foo
INNER JOIN #foo
ON @foo.id = #foo.id

DROP TABLE #foo

The above fails with the following error:

Server: Msg 137, Level 15, State 2, Line 11
Must declare the variable '@foo'.

This query, on the other hand, works fine:

SELECT id
FROM @foo f
INNER JOIN #foo
ON f.id = #foo.id


* Table variables declared in an outer procedure are not visible to the procedures it calls. The following is legal with #temp tables:

CREATE PROCEDURE faq_outer
AS
BEGIN
CREATE TABLE #outer
(
letter CHAR(1)
)

EXEC faq_inner

SELECT letter FROM #outer

DROP TABLE #outer
END
GO

CREATE PROCEDURE faq_inner
AS
BEGIN
INSERT #outer VALUES('a')
END
GO


EXEC faq_outer

Results:

letter
------
a

(1 row(s) affected)

However, you cannot do this with table variables. The parser catches the error as soon as you try to create the inner procedure:

CREATE PROCEDURE faq_outer
AS
BEGIN
DECLARE @outer TABLE
(
letter CHAR(1)
)

EXEC faq_inner

SELECT letter FROM @outer
END
GO

CREATE PROCEDURE faq_inner
AS
BEGIN
INSERT @outer VALUES('a')
END
GO

Results:

Server: Msg 137, Level 15, State 2, Procedure faq_inner, Line 4
Must declare the variable '@outer'.

For more information about sharing data between stored procedures, please see this article by Erland Sommarskog.
