Dec 25, 2007

SQL Server (user) instances and AutoAttach DB

Developers often need to attach databases on demand and start using them immediately. Traditionally, before developers can work with a database they must first attach it, then create a SQL login, and then add that login to the database's users list. With SQL Express this is no longer a requirement.

With SQL Express we can use auto-attached DBs and User Instances.
Both SQL 2005 Express and the full editions support auto-attachment of DBs; however, User Instances are supported only by SQL Express.

To enable auto-attachment of a DB, add AttachDbFilename=db.mdf to the connection string.
When you use "AttachDbFilename=|DataDirectory|db.mdf", the connection automatically looks for the DB in the App_Data folder of the web application.

However, there is a limitation: if the user is not an administrator, he would still not be able to auto-attach the DB. We can solve this by using "User Instances".
A user instance is a separate SQL Server instance that runs under the credentials of the currently logged-in user, so there is no need to give the user administrative privileges.
==================================================
User Instances is a feature that makes SQL Server 2005 Express different from other SQL Server editions. Before I explain User Instances, you need to understand that a SQL Server instance is essentially an in-memory occurrence of the sqlservr.exe executable program. Different SQL Server editions support different numbers of instances. For example, the SQL Server 2005 Enterprise Edition supports 50 instances, and the SQL Server 2005 Standard, Workgroup, and Express editions each support 16 instances. Each instance runs separately and has its own set of databases that aren't shared by any other instance. Client applications connect to each instance by using the instance name.

Typically, the first SQL Server instance you install becomes the "default" instance. The default instance uses the name of the computer on which it's installed. You can assign a name to subsequent instance installations, so they're called "named" instances. During the installation process, you can assign any name to a named instance. Client applications that want to connect to an instance use the servername\instancename convention. For example, if the server name is SQLServer1 and the instance name is MyInstance, the client application would connect to the named instance by using the server name SQLServer1\MyInstance.

As with the other SQL Server editions, SQL Server Express supports the default instance and named instances, but SQL Server Express uses SQLExpress as the default instance name rather than the name of the computer system.

In addition to regular SQL Server instances, SQL Server Express also supports User Instances. User instances are similar to named instances, but SQL Server Express creates user instances dynamically, and these instances have different limitations. When you install SQL Server Express, you have the option of enabling User Instances. By default, User Instances aren't enabled. After installation, you can enter the sp_configure command in SQL Server Management Studio Express (SSMSE) or the sqlcmd tool by using the following syntax:

sp_configure 'user instances enabled','1'

To disable User Instance support, replace 1 with a 0 in the sp_configure command.
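For example, a minimal enable-and-verify sequence looks like this:

EXEC sp_configure 'user instances enabled', 1
RECONFIGURE

EXEC sp_configure 'user instances enabled'  -- run_value should now show 1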

User Instances were designed to make deploying databases along with applications easier. User Instances let users create a database instance on demand even if they don't have administrative rights. To utilize User Instances, the application's connection string needs to use the attachdbfilename and user instance keywords as follows:

Data Source=.\SQLExpress;Integrated Security=true;AttachDbFilename=\MyDatabase.mdf;User Instance=true;

When an application opens a connection to a SQL Server Express database in which User Instances are enabled and the application uses the attachdbfilename and user instance keywords, SQL Server Express copies the master and msdb databases to the user's directory, starts a new instance of the sqlservr.exe program, and attaches the database named in the attachdbfilename keyword to the new instance.

Unlike common SQL Server instances, SQL Server Express User Instances have some limitations. User Instances don't allow network connections, only local connections. As you might expect with the network-access restriction, User Instances don't support replication or distributed queries to remote databases. In addition, Windows integrated authentication is required. For more information about SQL Server Express and User Instances you can read the Microsoft article "SQL Server 2005 Express Edition User Instances" at
http://msdn.microsoft.com/sql/express/default.aspx?pull=/library/en-us/dnsse/html/sqlexpuserinst.asp


Dec 9, 2007

SQL Tips

Difference between truncate and delete
  • DELETE operations are fully logged and can therefore be restored using transaction log backups (unless the database uses the Simple recovery model)
  • TRUNCATE operations are minimally logged (only page deallocations are recorded); the statement can still be rolled back inside an explicit transaction, but individual row deletions cannot be recovered from the log
  • DELETE can be used with a WHERE clause to selectively delete rows
  • TRUNCATE cannot selectively delete rows
  • TRUNCATE reseeds the identity column; DELETE does not reseed (see the demo below)
  • TRUNCATE cannot be used on tables referenced by foreign key constraints
  • since TRUNCATE does not log individual row deletions, it does not fire DELETE triggers; DELETE does fire triggers
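A quick demonstration of the identity-reseed difference (a sketch using a throwaway table):

CREATE TABLE dbo.DemoRows (id INT IDENTITY(1,1), name VARCHAR(32))
INSERT dbo.DemoRows (name) VALUES ('a')
INSERT dbo.DemoRows (name) VALUES ('b')

DELETE FROM dbo.DemoRows                  -- logged row by row; identity NOT reseeded
INSERT dbo.DemoRows (name) VALUES ('c')   -- gets id = 3

TRUNCATE TABLE dbo.DemoRows               -- deallocates data pages; identity reseeded
INSERT dbo.DemoRows (name) VALUES ('d')   -- gets id = 1 again

DROP TABLE dbo.DemoRows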
=====================================================
Difference between temp tables and table variables:
  • table variable operations are not logged in the transaction log, so they do not participate in transactions and are unaffected by ROLLBACK
  • indexes cannot be created explicitly on table variables (only implicitly, via PRIMARY KEY and UNIQUE constraints)
  • foreign key constraints cannot be created on either table variables or temp tables
  • table variables are not visible in nested scopes (nested stored procedures or dynamic SQL), whereas temp tables created in an outer procedure can be accessed from the procedures it calls
  • SPs with table variables require fewer recompilations than SPs with temp tables
  • statements that modify table variables cannot use parallel plans, whereas operations on temp tables can use multiple processors
  • table variables are best used for small amounts of data; despite the common belief, they are not purely in-memory structures and can still use tempdb (see the Nov 11 post below)
=====================================================
Types of Data Integrity
Entity integrity: PRIMARY KEY and UNIQUE constraints are used to maintain entity integrity
Domain integrity: CHECK and NOT NULL constraints are used to maintain domain integrity
Referential integrity: FOREIGN KEY constraints are used to maintain referential integrity
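A single table definition can exercise all three (a sketch; the dbo.Customers table referenced by the foreign key is hypothetical):

CREATE TABLE dbo.Orders
(
    OrderID    INT PRIMARY KEY,                    -- entity integrity
    OrderCode  CHAR(10) NOT NULL UNIQUE,           -- entity integrity
    Quantity   INT NOT NULL CHECK (Quantity > 0),  -- domain integrity
    CustomerID INT NOT NULL
        REFERENCES dbo.Customers (CustomerID)      -- referential integrity
)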
=====================================================
Local temp tables & variables: exist for the duration of the session and are not shared by other client sessions. If dynamic SQL creates a local temp table, it falls out of scope as soon as execution exits the EXEC statement used to run the dynamic SQL.
Global temp tables: shared by all client sessions and destroyed when the last session referencing them exits
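A short sketch of the scoping rules:

CREATE TABLE #local (i INT)     -- visible only to the current session
CREATE TABLE ##global (i INT)   -- visible to all sessions until the last one using it exits

EXEC ('CREATE TABLE #dyn (i INT)')
-- SELECT * FROM #dyn           -- fails: #dyn went out of scope when the EXEC ended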
===========================================================
Types of UDFs:
Scalar-valued UDF: returns a single value; can be used anywhere an expression is used, just like a subquery.
Table-valued UDF: returns a table.
There are 2 types of table-valued UDFs:
  • inline table-valued UDF: uses a single SELECT statement to return a table
  • multi-statement table-valued UDF: multiple statements are issued in the UDF body before the table is returned
UDFs can be used within SELECT/HAVING/WHERE clauses; table-valued UDFs return rowsets/tables and can be used in joins (a sketch of both flavors follows).
Temp tables cannot be used inside UDFs, only table variables.
UDFs are analogous to views that accept input parameters.
================================================
Indexed views are views whose result set is materialized to disk.
They must be created with the SCHEMABINDING option,
have restrictions on the columns the indexed view can contain,
have restrictions on the columns that can be used within the index created for the view,
and require a unique clustered index on the view.
Session options such as ANSI_NULLS, QUOTED_IDENTIFIER, and ARITHABORT must be set to ON when the view and its index are created.
On Standard Edition, indexed views are not used implicitly by queries; you must reference the view by name and use the NOEXPAND optimizer hint.
If GROUP BY is used, COUNT_BIG(*) must be included in the select list.
They should not be used in write-heavy OLTP environments; they suit infrequently updated base tables.
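A minimal sketch (the dbo.OrderDetail table is hypothetical; the aggregated column must be declared NOT NULL):

SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(Qty)     AS TotalQty,  -- Qty must be NOT NULL
       COUNT_BIG(*) AS RowCnt     -- required because GROUP BY is used
FROM dbo.OrderDetail              -- two-part names are required under SCHEMABINDING
GROUP BY ProductID
GO

-- the unique clustered index is what materializes the view to disk
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (ProductID)
GO

-- on Standard Edition, reference the view directly and force its use:
-- SELECT * FROM dbo.vOrderTotals WITH (NOEXPAND)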

============================================================
ANSI_NULLS: default OFF at the server level (most client libraries set it ON). When ON, col = NULL and col <> NULL each return 0 rows, even though the column has both NULL and non-NULL values.
Thus we need to ensure that we use the col IS NULL / col IS NOT NULL syntax for comparisons.
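A quick demo:

SET ANSI_NULLS ON
DECLARE @t TABLE (col INT)
INSERT @t VALUES (1)
INSERT @t VALUES (NULL)

SELECT * FROM @t WHERE col = NULL   -- 0 rows: the comparison evaluates to UNKNOWN
SELECT * FROM @t WHERE col IS NULL  -- 1 row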

ANSI_WARNINGS: default OFF. When ON, warnings are shown when one of the following situations arises:
SUM, MAX, MIN, or AVG is used against a column that contains NULLs
a divide-by-zero error occurs
an overflow occurs (e.g. a string would be truncated)

QUOTED_IDENTIFIER: default OFF. When ON, object identifiers may be enclosed in double quotes (to distinguish them from reserved words) and string literals must use single quotes.

Indexed views need all of the above session options to be set to ON.

CONCAT_NULL_YIELDS_NULL: default OFF. When ON, concatenating a string with NULL yields NULL.

ANSI_DEFAULTS: default OFF. When ON, it sets the options above plus the following session options to ON:
ANSI_PADDING, ANSI_NULL_DFLT_ON, CURSOR_CLOSE_ON_COMMIT, IMPLICIT_TRANSACTIONS


Dec 5, 2007

More on MSMQ

Recoverable messages:

MSMQ messages are stored in memory by default, so there is a possibility of losing them if the AppDomain exits or the application crashes.
To create messages that get persisted to disk, we use recoverable messages.

Method 1:
Set the Recoverable property on the Message itself before sending it.

Message recoverableMessage = new Message();
recoverableMessage.Body = "Sample Recoverable Message";
recoverableMessage.Recoverable = true;


Method 2:
Use the "DefaultPropertiesToSend.Recoverable" property on MessageQueue itself to mark all messages sent via this queue as recoverable:

MessageQueue msgQ = new MessageQueue(@".\private$\Orders");
msgQ.DefaultPropertiesToSend.Recoverable = true;
msgQ.Send("This message will be marked as Recoverable");

=====================================================
Transactional Messages (Internal transactions)

// Note: the queue must have been created as transactional (e.g. MessageQueue.Create(path, true))
MessageQueueTransaction msgTx = new MessageQueueTransaction();
MessageQueue msgQ = new MessageQueue(@".\private$\Orders");

msgTx.Begin();
try
{
    msgQ.Send("First Message", msgTx);
    msgQ.Send("Second Message", msgTx);
    msgTx.Commit();

    msgTx.Begin();
    Message msg = msgQ.Receive(msgTx);
    msgTx.Commit();
}
catch
{
    msgTx.Abort();
}
finally
{
    msgQ.Close();
}

================================================================

External Transactions (using MSDTC and COM+)

====================================================================

using System.EnterpriseServices;
using System.Messaging;

[assembly: ApplicationName("MSMQ_Example")]
[assembly: ApplicationActivation(ActivationOption.Server)]

[Transaction(TransactionOption.Required)]
public class Message : ServicedComponent
{
    [AutoComplete()]
    public void DoTransaction()
    {
        //Create the queue
        MessageQueue msgQ = new MessageQueue(@".\private$\Orders");
        Order incomingOrder;

        //Configure the formatter with the expected type(s)
        Type[] targetTypes = new Type[1];
        targetTypes[0] = typeof(Order);
        msgQ.Formatter = new XmlMessageFormatter(targetTypes);

        //DB Connection setup code removed
        SqlConnection sqlConn = new SqlConnection(connectionString);
        SqlCommand insertLog = new SqlCommand(sqlString, sqlConn);
        //Command setup code removed

        try
        {
            //Receive new message, casting the result to the Order type
            incomingOrder = (Order)(msgQ.Receive(new TimeSpan(0, 0, 30)).Body);

            //Use ID from message to set up parameters for the database insert
            orderIDParam.Value = incomingOrder.ID;
            receiveParam.Value = DateTime.Now;
            insertLog.Parameters.Add(orderIDParam);
            insertLog.Parameters.Add(receiveParam);
            sqlConn.Open();

            //Insert a new entry into the Log table
            insertLog.ExecuteNonQuery();

            //The transaction completes successfully via [AutoComplete];
            //it can also be marked explicitly using ContextUtil.SetComplete
        }
        catch
        {
            //Rethrow the exception; only then will [AutoComplete] abort the transaction
            //(without [AutoComplete], abort explicitly using ContextUtil.SetAbort)
            throw;
        }
        finally
        {
            //Always close the connection, regardless of success/failure
            sqlConn.Close();
        }
    }
}


After you create the serviced component, it must be strongly named and then deployed to the GAC so as to work correctly as a COM+ application.


After the component has been installed into the COM+ application, clients can create and call your component just like any other .NET class, and your component will automatically run inside COM+ and act as part of a transaction.


MSMQ Primer

Creating and managing queues is a breeze with .NET.
Queue names in .NET:

public queues: .\qname
private queues: .\private$\qname
journal queues: .\qname\journal$

Create queues using the MessageQueue class, and specify whether you need a journal queue along with the message queue; journal queues hold copies of messages after they have been "received" from the main message queue.
The third type of queue is the administration/acknowledgement queue; we use the MessageQueue.ReceiveByCorrelationId or PeekByCorrelationId method to receive admin/ack messages from these queues after a message has been received from the main message queue.
The PeekByCorrelationId method takes the Id of the original message that was sent to the main message queue.

public class QueueMgmt
{
    public static MessageQueue CreateQueue(string path)
    {
        MessageQueue mq = null;
        if (MessageQueue.Exists(path))
        {
            mq = new MessageQueue(path);
            Console.WriteLine("Connected to existing queue " + path);
        }
        else
        {
            mq = MessageQueue.Create(path);
            Console.WriteLine("Created new queue " + path);
        }
        Console.WriteLine("Message Queue format is " + mq.FormatName);
        return mq;
    }
}

=========================================================

private void btnCreateQueue_Click(object sender, EventArgs e)
{
    // Create a public queue and configure it for journaling.
    using (MessageQueue mq = QueueMgmt.CreateQueue(@".\DotNet"))
    {
        mq.Label = "DotNet Message Queue";
        mq.UseJournalQueue = true;
    } // mq disposed.

    // Create a private queue for acknowledgments.
    using (MessageQueue mq = QueueMgmt.CreateQueue(@".\Private$\DotNetAdmin"))
    {
        mq.Label = "DotNet Acknowledge Queue";
    } // mq disposed.
}


=======================================
After creating the queues, you can send messages to a queue using the mq.Send method; note we can send any XML-serializable type in the message body. Remember to set the formatter's TargetTypes property with the types it is supposed to deserialize when receiving.

Note we are also setting the administration queue, which will receive the acknowledgment messages once messages are read from the queue.

private void btnSendMessage_Click(object sender, EventArgs e)
{
    using (MessageQueue mq = new MessageQueue(@".\dotnet"))
    {
        using (System.Messaging.Message msg = new System.Messaging.Message())
        {
            msg.Label = "dotnetmessage";
            msg.Body = "Message : " + DateTime.Now.ToString();
            msg.AdministrationQueue = new MessageQueue(@".\private$\dotnetadmin");
            msg.AcknowledgeType = AcknowledgeTypes.FullReceive;
            mq.Send(msg);
            Console.WriteLine("Message sent with ID: " + msg.Id);
        }
    }
}


==============================================
Once the message is sent, it arrives in the public or private queue to which it was addressed.
When receiving, we read the message from the public/private queue; once it has been received, a copy is stored in the journal queue if one has been set up.
Acknowledgement messages are also sent to the admin queue, depending on the AcknowledgeTypes specified when the message was sent.
Note we can get multiple ack/admin messages for a single message by ORing different values of the AcknowledgeTypes enumeration.

private void btnReceive_Click(object sender, EventArgs e)
{
    Type[] targetTypes = { typeof(string) };
    string msgId = string.Empty;

    using (MessageQueue mq = new MessageQueue(@".\dotnet"))
    {
        ((XmlMessageFormatter)mq.Formatter).TargetTypes = targetTypes;
        using (System.Messaging.Message msg = mq.Receive())
        {
            Console.WriteLine("Received Message ID {0} : \n\t Body = {1}", msg.Id, msg.Body);
            msgId = msg.Id;
        }
    }

    using (MessageQueue mq = new MessageQueue(@".\dotnet\Journal$"))
    {
        ((XmlMessageFormatter)mq.Formatter).TargetTypes = targetTypes;
        using (System.Messaging.Message msg = mq.PeekById(msgId))
        {
            Console.WriteLine("Peeked message from Journal Queue {0}:\n\t Body = {1}", msg.Id, msg.Body);
        }
    }

    using (MessageQueue mq = new MessageQueue(@".\private$\dotnetadmin"))
    {
        using (System.Messaging.Message msg = mq.PeekByCorrelationId(msgId))
        {
            Console.WriteLine("Peeked Acknowledgement queue {0} : \n\t Acknowledgement is {1}", msg.Id, msg.Acknowledgment);
        }
    }
}


Dec 1, 2007

DBCC Commands

Some useful DBCC commands are:

DBCC CHECKIDENT ('authors', RESEED, 0) -- resets the identity seed of the authors table

DBCC DROPCLEANBUFFERS --- removes all clean buffers from the buffer pool
DBCC FREEPROCCACHE -- clears the procedure (plan) cache for all DBs on the server
DBCC FLUSHPROCINDB(<dbid>) ---- clears the plan cache for a particular DB (pass the database id, e.g. from DB_ID('mydb'))


You can create proxy users (users without logins) in SQL 2005 using 2 methods:
1. CREATE USER greg WITHOUT LOGIN

2. CREATE CERTIFICATE al_cert WITH SUBJECT = 'certificate for al'
CREATE USER al FOR CERTIFICATE al_cert


Nov 26, 2007

Conditional TSQL sorting using CASE statement

We can do conditional sorting in 2 ways.
1. using dynamic SQL
2. using CASE statements.

1. using Dynamic SQL
DECLARE @sql nvarchar(400), @input int, @paramdef nvarchar(100)
SELECT @input = 2, @paramdef = N'@input int'

set @sql = N'if @input = 1 select * from acctlist order by email ASC; if @input = 2 select * from acctlist order by email DESC'

exec sp_executesql @sql, @paramdef, @input

2. using CASE statements (assuming @col and @order are parameters, e.g. of a stored procedure):
select * from authors
order by
CASE WHEN @col = 'firstname' AND @order = 'ASC' THEN firstname END ASC,
CASE WHEN @col = 'firstname' AND @order = 'DESC' THEN firstname END DESC,
CASE WHEN @col = 'email' AND @order = 'ASC' THEN email END ASC,
CASE WHEN @col = 'email' AND @order = 'DESC' THEN email END DESC,
----Default sort by lastname
CASE WHEN @order = 'ASC' THEN lastname END ASC,
CASE WHEN @order = 'DESC' THEN lastname END DESC
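For completeness, here is how the variables above might be supplied (in practice @col and @order would be stored procedure parameters):

DECLARE @col varchar(20), @order varchar(10)
SELECT @col = 'email', @order = 'DESC'
-- ...followed by the SELECT above: rows sort by email descending,
-- with the lastname CASE branches acting as the default/tiebreaker sort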


Nov 25, 2007

Creating ServerTraces using SQL Profiler for background profiling

Follow these steps to create a server trace for either SQL 2000 or SQL 2005.
Server traces are used for running traces in the background: SQL Server does not have to pump profiling results back to the SQL 2005 Profiler GUI, it can log to a file instead.

Note you can create server trace script files for either SQL 2000 or SQL 2005 using the SQL 2005 Profiler tool:
  • First use Profiler to define the events, columns, and filters needed. Some useful events are SQL:BatchCompleted, RPC:Completed, and SP:StmtCompleted. Important columns are Duration, CPU, Reads, and Writes. Some advanced events are SP:Recompile and Scan:Started (to check for table and index scans)
  • Click the Run button, then immediately stop the trace
  • Click the File menu, expand the Export option, and then expand the Script Trace Definition option. Choose For SQL Server 2005 (or SQL 2000 if creating a script for an older SQL Server) and select a filename to save the script.
  • Once the script has been saved, open it for editing in SQL Server Management Studio.
    The following line of the script must be edited, and a valid path must be specified, including a filename:
    exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere',
    @maxfilesize, NULL. The @maxfilesize is 5MB by default
  • Run the script. The generated script will also select back the @traceID assigned to the new trace
  • Once you are done, use that @traceID to stop and then close the trace:

  • EXEC sp_trace_setstatus @traceid=99, @status=0 -- stop the trace
    EXEC sp_trace_setstatus @traceid=99, @status=2 -- close the trace and delete its definition
  • The fn_trace_gettable function can be used to read the data from the trace file:

    SELECT * FROM ::fn_trace_gettable('C:\Traces\myTrace.trc', 999)
    where the second argument (999 here) is the number of rollover trace files to read
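To analyze the results, you can load the trace into a table (a sketch; the path and table name are illustrative):

SELECT * INTO TraceResults
FROM ::fn_trace_gettable('C:\Traces\myTrace.trc', DEFAULT)

SELECT TOP 10 TextData, Duration, CPU, Reads, Writes
FROM TraceResults
ORDER BY Duration DESC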


Nov 11, 2007

Limitations of table variables

Refer to this link: Table Variables

DECLARE @people TABLE
(
id INT,
name VARCHAR(32)
)

A table variable is created in memory, and so performs slightly better than #temp tables (also because there is even less locking and logging in a table variable). A table variable might still perform I/O to tempdb (which is where the performance issues of #temp tables make themselves apparent), though the documentation is not very explicit about this.

Table variables are automatically cleared when the procedure or function goes out of scope, so you don't have to remember to drop or clear the data (which can be a good thing or a bad thing; remember "release early"?). The tempdb transaction log is less impacted than with #temp tables; table variable log activity is truncated immediately, while #temp table log activity persists until the log hits a checkpoint, is manually truncated, or when the server restarts.

Table variables are the only way you can use DML statements (INSERT, UPDATE, DELETE) on temporary data within a user-defined function. You can create a table variable within a UDF, and modify the data using one of the above statements. For example, you could do this:

CREATE FUNCTION dbo.example1
(
)
RETURNS INT
AS
BEGIN
DECLARE @t1 TABLE (i INT)
INSERT @t1 VALUES(1)
INSERT @t1 VALUES(2)
UPDATE @t1 SET i = i + 5
DELETE @t1 WHERE i < 7

DECLARE @max INT
SELECT @max = MAX(i) FROM @t1
RETURN @max
END
GO

However, try that with a #temp table:

CREATE FUNCTION dbo.example2
(
)
RETURNS INT
AS
BEGIN
CREATE TABLE #t1 (i INT)
INSERT #t1 VALUES(1)
INSERT #t1 VALUES(2)
UPDATE #t1 SET i = i + 5
DELETE #t1 WHERE i < 7

DECLARE @max INT
SELECT @max = MAX(i) FROM #t1
RETURN @max
END
GO

Results:

Server: Msg 2772, Level 16, State 1, Procedure example2, Line 7
Cannot access temporary tables from within a function.

Or try accessing a permanent table:

CREATE TABLE table1
(
id INT IDENTITY,
name VARCHAR(32)
)
GO

CREATE FUNCTION dbo.example3
(
)
RETURNS INT
AS
BEGIN
INSERT table1(name) VALUES('aaron')
RETURN SCOPE_IDENTITY()
END
GO

Results:

Server: Msg 443, Level 16, State 2, Procedure example3, Line 8
Invalid use of 'INSERT' within a function.

Table variables can lead to fewer stored procedure recompilations than temporary tables (see KB #243586 and KB #305977), and — since they cannot be rolled back — do not bother with the transaction log.

So, why not use table variables all the time? Well, when something sounds too good to be true, it probably is. Let's visit some of the limitations of table variables (part of this list was derived from KB #305977):

* Table variables are only allowed in SQL Server 2000+, with compatibility level set to 80 or higher.

* You cannot use a table variable in either of the following situations:

INSERT @table EXEC sp_someProcedure

SELECT * INTO @table FROM someTable

* You cannot truncate a table variable.

* Table variables cannot be altered after they have been declared.

* You cannot explicitly add an index to a table variable, however you can create a system index through a PRIMARY KEY CONSTRAINT, and you can add as many indexes via UNIQUE CONSTRAINTs as you like. What the optimizer does with them is another story. One thing to note is that you cannot explicitly name your constraints, e.g.:

DECLARE @myTable TABLE
(
CPK1 int,
CPK2 int,
CONSTRAINT myPK PRIMARY KEY (CPK1, CPK2)
)

-- yields:
Server: Msg 156, Level 15, State 1, Line 6
Incorrect syntax near the keyword 'CONSTRAINT'.

-- yet the following works:
DECLARE @myTable TABLE
(
CPK1 int,
CPK2 int,
PRIMARY KEY (CPK1, CPK2)
)

* You cannot use a user-defined function (UDF) in a CHECK CONSTRAINT, computed column, or DEFAULT CONSTRAINT.

* You cannot use a user-defined type (UDT) in a column definition.

* Unlike a #temp table, you cannot drop a table variable when it is no longer necessary—you just need to let it go out of scope.

* You cannot generate a table variable's column list dynamically, e.g. you can't do this:

SELECT * INTO @tableVariable

-- yields:

Server: Msg 170, Level 15, State 1, Line 1
Line 1: Incorrect syntax near '@tableVariable'.

You also can't build the table variable inside dynamic SQL, and expect to use it outside that scope, e.g.:

DECLARE @colList VARCHAR(8000), @sql VARCHAR(8000)
SET @colList = 'a INT,b INT,c INT'
SET @sql = 'DECLARE @foo TABLE('+@colList+')'
EXEC(@sql)
INSERT @foo SELECT 1,2,3

-- this last line fails:

Server: Msg 137, Level 15, State 2, Line 5
Must declare the variable '@foo'.

This is because the rest of the script knows nothing about the temporary objects created within the dynamic SQL. Like other local variables, table variables declared inside of a dynamic SQL block (EXEC or sp_executeSQL) cannot be referenced from outside, and vice-versa. So you would have to write the whole set of statements to create and operate on the table variable, and perform it with a single call to EXEC or sp_executeSQL.

* The system will not generate automatic statistics on table variables. Likewise, you cannot manually create statistics (statistics are used to help the optimizer pick the best possible query plan).

* An INSERT into a table variable will not take advantage of parallelism.

* A table variable will always have a cardinality of 1, because the table doesn't exist at compile time.

* Table variables must be referenced by an alias, except in the FROM clause. Consider the following two scripts:

CREATE TABLE #foo(id INT)
DECLARE @foo TABLE(id INT)
INSERT #foo VALUES(1)
INSERT #foo VALUES(2)
INSERT #foo VALUES(3)
INSERT @foo SELECT * FROM #foo

SELECT id
FROM @foo
INNER JOIN #foo
ON @foo.id = #foo.id

DROP TABLE #foo

The above fails with the following error:

Server: Msg 137, Level 15, State 2, Line 11
Must declare the variable '@foo'.

This query, on the other hand, works fine:

SELECT id
FROM @foo f
INNER JOIN #foo
ON f.id = #foo.id


* Table variables are not visible to the calling procedure in the case of nested procs. The following is legal with #temp tables:

CREATE PROCEDURE faq_outer
AS
BEGIN
CREATE TABLE #outer
(
letter CHAR(1)
)

EXEC faq_inner

SELECT letter FROM #outer

DROP TABLE #outer
END
GO

CREATE PROCEDURE faq_inner
AS
BEGIN
INSERT #outer VALUES('a')
END
GO


EXEC faq_outer

Results:

letter
------
a

(1 row(s) affected)

However, you cannot do this with table variables. The parser will find the error before you can even create it:

CREATE PROCEDURE faq_outer
AS
BEGIN
DECLARE @outer TABLE
(
letter CHAR(1)
)

EXEC faq_inner

SELECT letter FROM @outer
END
GO

CREATE PROCEDURE faq_inner
AS
BEGIN
INSERT @outer VALUES('a')
END
GO

Results:

Server: Msg 137, Level 15, State 2, Procedure faq_inner, Line 4
Must declare the variable '@outer'.

For more information about sharing data between stored procedures, please see this article by Erland Sommarskog.


Oct 31, 2007

TSQL: Parsing delimited string into table

CREATE FUNCTION dbo.udf_ItemParse
(
    @Input VARCHAR(8000), @Delimeter CHAR(1) = '|'
)
RETURNS @ItemList TABLE
(
    Item VARCHAR(50),
    Pos  INT
)
AS
BEGIN
    DECLARE @Item VARCHAR(50)
    DECLARE @StartPos INT, @Length INT
    DECLARE @Pos INT
    SET @Pos = 0
    WHILE LEN(@Input) > 0
    BEGIN
        SET @StartPos = CHARINDEX(@Delimeter, @Input)
        IF @StartPos < 0 SET @StartPos = 0
        SET @Length = LEN(@Input) - @StartPos - 1
        IF @Length < 0 SET @Length = 0
        IF @StartPos > 0
        BEGIN
            -- Delimiter found: emit the text before it and trim it (and the delimiter) off the input
            SET @Pos = @Pos + 1
            SET @Item = SUBSTRING(@Input, 1, @StartPos - 1)
            SET @Input = SUBSTRING(@Input, @StartPos + 1, LEN(@Input) - @StartPos)
        END
        ELSE
        BEGIN
            -- No delimiter left: the remainder is the last item
            SET @Pos = @Pos + 1
            SET @Item = @Input
            SET @Input = ''
        END
        INSERT @ItemList (Item, Pos) VALUES (@Item, @Pos)
    END
    RETURN
END
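A quick usage sketch:

SELECT Item, Pos FROM dbo.udf_ItemParse('red|green|blue', '|')
-- returns (red, 1), (green, 2), (blue, 3)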


Oct 26, 2007

SoapExtension for Logging Soap Exceptions

using System;
using System.Web;
using System.Web.Services;
using System.Web.Services.Protocols;
using System.IO;
using System.Xml;
using MIT.BizServices.MCUtil.Misc;

namespace MIT.Common.ExceptionHandlingSoapExtension
{
//
// SOAP extension to transfer full exception information from
// server to client in the event of an exception being thrown
// from a webmethod
//

//
// In the event of an exception being thrown during the execution
// of a webmethod on the server, this class steps in and writes detailed
// exception information to the output stream.
//
// Client side, if an exception is detected on the response from a
// webmethod call, this class retrieves the detailed exception
// information from the input stream and throws an exception
// containing the detailed information
//

public class ExceptionHandlingSoapExtension : SoapExtension
{

region "Initialisation methods - not used because this class has no state to maintain"
public override object GetInitializer(System.Type serviceType) {
return null;
}

public override object GetInitializer(System.Web.Services.Protocols.LogicalMethodInfo methodInfo, System.Web.Services.Protocols.SoapExtensionAttribute attribute) {
return null;
}

public override void Initialize(object initializer)
{
}
#endregion

region "Stream chaining code"
private Stream oldStream;
private Stream newStream;

public override Stream ChainStream(Stream stream) {
oldStream = stream;
newStream = new MemoryStream();
return newStream;
}

//
// Copies the contents of one stream to another
//

private void StreamCopy(Stream source, Stream dest) {
StreamReader Reader = new StreamReader(source);
StreamWriter Writer = new StreamWriter(dest);
Writer.WriteLine(Reader.ReadToEnd());
Writer.Flush();
}
#endregion

public override void ProcessMessage(System.Web.Services.Protocols.SoapMessage message)
{
switch (message.Stage)
{
case SoapMessageStage.BeforeSerialize:
return;

case SoapMessageStage.AfterSerialize:
//If exception present in message, write details
//to the new stream
if (message.Exception != null)
{
InsertExceptionDetails(message.Exception);
}

//Copy new stream to old stream
newStream.Position = 0;
StreamCopy(newStream, oldStream);
return;

case SoapMessageStage.BeforeDeserialize:
//Copy old stream to new stream
StreamCopy(oldStream, newStream);
newStream.Position = 0;
return;

case SoapMessageStage.AfterDeserialize:
//If exception present in message,
//get details from stream and throw to caller
if (message.Exception != null)
{
CheckExceptionDetails();
}
return;

default:
throw new ArgumentException("Invalid message stage");

}
}

//
// Insert details of the specified exception into the output stream

// <param name="ex">Exception to write details for
private void InsertExceptionDetails(Exception ex) {
//Read output stream into XML document
newStream.Position = 0;
XmlTextReader Reader = new XmlTextReader(newStream);
XmlDocument MessageDoc = new XmlDocument();
MessageDoc.Load(Reader);

XmlNamespaceManager NsMgr = new XmlNamespaceManager(MessageDoc.NameTable);
NsMgr.AddNamespace("soap", "http://schemas.xmlsoap.org/soap/envelope/");

//Construct string describing exception
string ErrorInfo;
if (ex.InnerException != null)
{
ErrorInfo = ex.InnerException.Message;
}
else
{
ErrorInfo = ex.Message;
}
//log exception to flat file
Helper.LogException(string.Empty, ex);

//Find existing soap:Fault node describing exception
XmlNode ExceptionNode;
ExceptionNode = MessageDoc.SelectSingleNode("//soap:Fault", NsMgr);

//Add extended exception detail node to Fault node
XmlElement ExceptionDetail;
ExceptionDetail = MessageDoc.CreateElement("ExtendedExceptionDetails");

ExceptionDetail.InnerText = ErrorInfo;

ExceptionNode.AppendChild(ExceptionDetail);

//Write XML document back to output stream
newStream = new MemoryStream();
MessageDoc.Save(newStream);
}

//
// Reads extra exception information from stream

// Details of any exception detail found in the input stream
private void CheckExceptionDetails()
{
//Read input stream into XML document
newStream.Position = 0;
XmlTextReader Reader = new XmlTextReader(newStream);
XmlDocument MessageDoc = new XmlDocument();
MessageDoc.Load(Reader);

XmlNamespaceManager NsMgr = new XmlNamespaceManager(MessageDoc.NameTable);
NsMgr.AddNamespace("soap", "http://schemas.xmlsoap.org/soap/envelope/");

//Find extended exception detail node
XmlNode ExceptionDetailNode;
ExceptionDetailNode = MessageDoc.SelectSingleNode("//soap:Fault/ExtendedExceptionDetails", NsMgr);
XmlNode exception = MessageDoc.SelectSingleNode("//soap:Fault/faultstring", NsMgr);

//Return detail text if found, empty string otherwise
if (ExceptionDetailNode != null)
{
if (exception != null)
{
if (exception.InnerText.Contains("WebServiceException"))
{
throw new WebServiceException(ExceptionDetailNode.InnerText);
}
else
{
throw new Exception(ExceptionDetailNode.InnerText);
}
}
}
}
}
}


Exception and Logging Application Block

Add references to the following DLLs:
Microsoft.Practices.EnterpriseLibrary.Common.dll
Microsoft.Practices.EnterpriseLibrary.Logging.dll
If you intend to log information to a database, you also need to reference Microsoft.Practices.EnterpriseLibrary.Data.dll

For adding tracing to a method call, use the following (Tracer is IDisposable, so it goes in a using block):
public DataSet GetPricingDataSet(string loanNumber)
{
    using (new Tracer("Trace"))
    {
        DoSomething();
        // For additional logging use:
        Logger.Write("message");
        // or use:
        Logger.Write(new LogEntry("message"));
        return null; // data-access code elided
    }
}

Add a Trace listener in the Logging Application Block via the Enterprise Library Configuration tool and then specify a Category called "Trace" which logs to this newly configured Trace Listener.

For more info:
http://www.devx.com/dotnet/Article/31463/1954?pf=true

http://www.devx.com/dotnet/Article/35736?trk=DXRSS_DOTNET
http://www.devx.com/dotnet/Article/36184?trk=DXRSS_DOTNET - Enterprise Library 3.0
======================================================
For Exception Handling:
======================================================
Add references to the following DLLs:
Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.dll
Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.dll

public static void LogException(string policyName, Exception ex)
{
if (string.IsNullOrEmpty(policyName))
policyName = ConfigurationManager.AppSettings["ExceptionPolicy"];

ExceptionPolicy.HandleException(ex, policyName);
}



Oct 3, 2007

TSQL to concatenate column values into a single string

DECLARE @cols AS varchar(3000)
;
WITH YearsCTE
AS
(SELECT DISTINCT YEAR(OrderDate) as [Year] FROM Sales.SalesOrderHeader)

SELECT @cols = ISNULL(@cols + ',[', '[') + CAST([YEAR] AS varchar(4)) + ']'
FROM YearsCTE
ORDER BY [YEAR]

Note the use of ISNULL: on the first row @cols is NULL, so ISNULL(@cols + ',[', '[') yields just '['; on every subsequent row it prepends ',[', so the ',' separator is added only for records numbered 2 or greater.
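For example, if Sales.SalesOrderHeader contains orders from 2001 through 2004, the result is:

SELECT @cols
-- [2001],[2002],[2003],[2004]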


Sep 23, 2007

Create and deploy a Sharepoint Feature

Suppose you need to replace the SmallSearchInputBox delegate control on the SharePoint site with a new feature. Follow these steps:

Step 1: create a copy of an existing feature, e.g. CustomLightUp.
That is, copy the folder 12\Template\Features\CustomLightUp to 12\Template\Features\NewSearchArea

Step 2: modify feature.xml to contain a new GUID and add a reference to the elements.xml (within a subdirectory or the same directory as feature.xml, whichever you prefer) which describes the feature.
Specify whether the feature should be Hidden or displayed in the SharePoint site settings.
Specify a Title and Description if the feature is NOT hidden.

Step 3: then modify the elements.xml file in the new feature's folder to have the correct settings. Scope can take one of the following values: Farm, WebApplication, Site and Web
Site is the site collection
Web is an individual site
WebApplication is the parent of the site collection
Farm enables the feature server-wide

Step 4: suppose the CustomLightUp feature is installed at Farm level and is referenced through a delegate control in the master page with control id "SmallSearchInputBox", and you want to create a new feature at Site scope for the same delegate control.
Then there are 2 ways to achieve this:

1. rename the delegate control id from "SmallSearchInputBox" to "newSearch", make the corresponding modification in elements.xml to contain the new control id, and then install the feature at Site level using the stsadm utility

2. keep the control id the same (i.e. "SmallSearchInputBox") but modify the Sequence value in elements.xml to a value less than 100
(for more on this refer to my previous blog on delegate controls). Then install the feature at the site level using stsadm:
stsadm -o installfeature -filename NewSearchArea\feature.xml
stsadm -o activatefeature -filename NewSearchArea\feature.xml -url http://w.c.com

For automatic deployment of the feature read this:
http://www.codeplex.com/wspbuilder
Or
http://msdn.microsoft.com/msdnmag/issues/07/05/OfficeSpace/
and
http://msdn.microsoft.com/msdnmag/issues/07/08/OfficeSpace/

If you want to list all existing features, you need to extend the stsadm utility:
http://sharepointsolutions.blogspot.com/2006/09/extending-stsadmexe-with-custom.html
and
http://msdn2.microsoft.com/en-us/library/bb417382.aspx
and
http://www.andrewconnell.com/blog/articles/MossStsadmWcmCommands.aspx
and
http://stsadm.blogspot.com/2007/08/enumerate-features.html


Sep 21, 2007

Automating Dev, QA, Staging, and Production Web.Config Settings with VS 2005

One of the questions I get asked fairly regularly is: "how can I easily change different configuration settings in my web.config file based on whether my application is in a dev, qa, staging or production mode?" The most common scenario for this is one where an application uses different database connection-strings for testing and production purposes.

It turns out you can easily automate this configuration process within the Visual Studio build environment (and do so in a way that works both within the IDE, as well as with command-line/automated builds). Below are the high-level steps you take to do this. They work with both VS 2005 and VS 2008.

  1. Use ASP.NET Web Application Projects (which have MSBuild based project files)
  2. Open the VS Configuration Manager and create new "Dev", "QA", "Staging" build configurations for your project and solution
  3. Add new "web.config.dev", "web.config.qa", and "web.config.staging" files in your project and customize them to contain the app's mode specific configuration settings
  4. Add a new "pre-build event" command to your project file that can automatically copy over the web.config file in your project with the appropriate mode specific version each time you build the project (for example: if your solution was in the "Dev" configuration, it would copy the web.config.dev settings to the main web.config file).

Once you follow these steps, you can then just pick the mode your solution is in using the configuration drop-down in the VS standard toolbar:

The next time you build/run after changing the configuration mode, VS will automatically modify your application's web.config file to pick up and use the web.config settings specific to that build configuration (so if you select QA it will use the QA settings, if you select Staging it will use the Staging settings).

The benefit with this approach is that it works well in a source control environment (everyone can sync and build locally without having to make any manual changes on their local machines). It also works on a build server - including with scenarios where you are doing automated command-line solution builds.

To learn more about the exact steps to set this up, please read the Managing Multiple Configuration File Environments with Pre-Build Events post that Scott Hanselman published earlier tonight. Also check out ScottGu's ASP.NET Tips, Tricks, and Gotchas page for other ASP.NET Tips/Tricks recommendations.


Sep 18, 2007

Multi-column sorting of a typed collection

Use the following one-liner (an anonymous delegate passed to List<T>.Sort) for multi-column or single-column sorting of a typed collection:

List<Liability> liabList = new List<Liability>();

liabList.Sort(delegate(Liability l1, Liability l2)
{
    int r = l1.Type.CompareTo(l2.Type);
    if (r == 0 && l1.HolderName != null)
        r = l1.HolderName.CompareTo(l2.HolderName);
    if (r == 0)
        r = l1.Amount.CompareTo(l2.Amount);
    return r;
});


Jun 27, 2007

NAnt IntelliSense with NAnt Contrib tasks and the VS2005 NAnt add-in

The first task is to generate the XSD (schema) file that needs to be added in order to enable IntelliSense.
Use the following NAnt .build script to generate the nant.xsd file:

<project name="nant" default="go">
<property name="NAntContrib" value="c:\rohit\nantcontrib-0.85\bin" />
<target name="go">
<loadtasks assembly="${NAntContrib}\NAnt.Contrib.Tasks.dll" />
<nantschema output="nant.xsd" target-ns="http://nant.sf.net/nightly/2006-03-08-0.85/nant.xsd"/>
</target>
</project>

Build this script using nant:
nant /f:nantxsd.build
This will create the nant.xsd file.

Copy this file nant.xsd to 'C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\Packages\schemas\xml' on your system, or if you are using VS2005 then copy to "C:\Program Files\Microsoft Visual Studio 8\Xml\Schemas".

In your NAnt build file (I make mine with the extension .build) include this xmlns attribute:
<project xmlns="http://nant.sf.net/nightly/2006-03-08-0.85/nant.xsd" name="test" default="build">

Now you will get IntelliSense for all NAnt tasks as well as NAnt Contrib tasks.
==============================================================
To add the NAnt add-in for VS2005, download it from:
http://sourceforge.net/projects/nantaddin

Create a directory for Visual Studio add-ins, e.g. "%Program Files%/Microsoft Visual Studio 8/Addins", and copy the NAntAddin directory along with the "NAntAddin.AddIn" file into it.

Open VS2005 and click Tools\Options.
Click Environment\"Add-in/Macros Security".
Then click "Add" to add the "%Program Files%/Microsoft Visual Studio 8/Addins" directory.

Restart VS2005.
Then click Tools\Add-in Manager.
Then check the checkbox against "NAntAddin".

Now you will be able to run NAnt scripts, or parts of them, directly from within VS2005.


Turning an ascx user control into a redistributable custom control

This is possible using the new binary compilation feature of ASP.NET 2.0 and the availability of the ClassName property on the user control.
The ClassName property allows you to specify the namespace that is needed when consuming the user control from another application.

We can publish multiple controls within a single assembly, but we have to ensure that the web project contains only user controls (ascx files along with the associated code-behind files).
If you use the "use fixed naming and single page assemblies" option, a separate assembly is generated for each user control. The advantage of this option is that if you make changes and deploy a newer version of the user control, you need not modify the consuming application.

This is because the assembly NAME remains the same on each compilation of the user control project, so we do not need to update the reference to the user control assembly from within the consuming application.

There is one more option if you decide to use "use fixed naming and single page assemblies":
after the separate assemblies are created using the Publish Website option (or the aspnet_compiler.exe tool), we can merge them into a single assembly using the aspnet_merge.exe tool. Another way to combine multiple user controls into a single assembly is to use Web Deployment Projects (WDP).

Read this excellent MSDN magazine article by Fritz Onion on Web Deployment projects (WDP) and for info about packaging user controls into assembly for multi application use:
http://msdn.microsoft.com/msdnmag/issues/07/04/ExtremeASPNET/

So the steps for converting a user control into a redistributable custom control are:
1. Create the User control in a Website project in VS2005.
The main restriction is that the user control needs to be self-contained. That is, it cannot depend on app-global things like App_Code or global.asax. If you do need to use the App_Code folder, you could use WDP or aspnet_merge.exe to combine the multiple assemblies into a single assembly.

When you create the user control (say MyTestUC.ascx), VS creates it with a @Control directive that looks like this:
<%@ Control Language="C#" ClassName="MyTestUC" %>

This is fine, except for one thing: we want the class to live within a namespace of our choice (instead of 'ASP', which is what's used by default). To do this, we simply modify the ClassName attribute to include the namespace (this is a new feature in 2.0). e.g.

<%@ Control Language="C#" ClassName="Acme.MyTestUC" %>

2. Publish the website to compile the user control into an assembly.
Note you need to "uncheck" the 'Allow this precompiled site to be updatable' option within "Publish Website" (or, if you use the aspnet_compiler tool, omit the -u switch, which would otherwise make the output updatable) to create a binary deployment of the user controls.

3. Add reference to the generated assembly in step 2 from within the consuming web application.
First, add a Register directive to your page. It should look something like this:

<%@ Register TagPrefix="Acme" Namespace="Acme" Assembly="App_Web_mytestuc.ascx.cdcab7d2" %>
Note here that the assembly name is "App_Web_mytestuc.ascx.cdcab7d2".
This assembly was generated using the "use fixed naming and single page assemblies" option.

Then reference the user control within the page of the consuming application like this:
<Acme:MyTestUC id="MyUC" runat="server" />

If you had multiple user controls within that single assembly you could then reference the second control from that assembly likewise:
<Acme:MyTestUC2 id="MyUC2" runat="server" />

That is all there is to it.


Apr 29, 2007

File Compress and File Upload

/// Upload a file
string strName = System.IO.Path.GetFileName(fileUpload.PostedFile.FileName);

try
{
    fileUpload.PostedFile.SaveAs(Server.MapPath(strName));
    this.lblResult.Text = strName + " was uploaded successfully.";
}
catch (Exception ex)
{
    this.lblResult.Text = "An Error Occurred While Uploading File.";
}

============================================

/// Compress and upload the file (requires System.IO and System.IO.Compression)
string strName = Path.GetFileName(fileUpload.PostedFile.FileName);

// Create a stream object to read the file.
Stream myStream = fileUpload.PostedFile.InputStream;
byte[] myBuffer = new byte[myStream.Length];

// Read the file using the Stream object and fill the buffer.
myStream.Read(myBuffer, 0, myBuffer.Length);
myStream.Close();

// Create the target .gz file.
FileStream compresFile = File.Create(Server.MapPath(Path.ChangeExtension(strName, "gz")));

// Write the buffer containing the file contents to the gzip file using the GZipStream object.
GZipStream myStreamZip = new GZipStream(compresFile, CompressionMode.Compress);
myStreamZip.Write(myBuffer, 0, myBuffer.Length);
myStreamZip.Close();
