Self Closing MVC Html Helpers

The following is an example of how to write a self-closing MVC Html helper similar to BeginForm(). It takes the form of a self-closing DIV tag; in fact we will write a Bootstrap panel.

We will be taking advantage of IDisposable: the Dispose method will write our closing div tag.

First we create a class which is passed an Action that writes our end tag; this class implements IDisposable and calls our required action.

using System;

namespace WebApps.HtmlHelpers
{
  internal class DisposableHtmlHelper : IDisposable
  {
    private readonly Action _end;

    public DisposableHtmlHelper(Action end)
    {
      _end = end;
    }

    public void Dispose()
    {
      _end();
    }
  }
}

Now we write our helper methods: a BeginPanel method which writes a div tag to the ViewContext response stream. It returns an instance of our newly created DisposableHtmlHelper as defined above, registering with it the method which will write our closing tag.

using System;
using System.Web.Mvc;

namespace WebApps.HtmlHelpers
{
  public static class HtmlHelpers
  {
    public static IDisposable BeginPanel(this HtmlHelper htmlHelper)
    {
      htmlHelper.ViewContext.Writer.Write(@"<div class=""panel panel-default"">");

      return new DisposableHtmlHelper(htmlHelper.EndPanel);
    }

    public static void EndDiv(this HtmlHelper htmlHelper)
    {
      htmlHelper.ViewContext.Writer.Write("</div>");
    }

    public static void EndPanel(this HtmlHelper htmlHelper)
    {
      htmlHelper.EndDiv();
    }
  }
}

We can now call this within a using statement in our view.

@using (Html.BeginPanel()) {
 // Monkey business would be here
}
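
As a possible variation (a sketch only, assuming Bootstrap 3 class names; it is not part of the original helper), BeginPanel could accept a title, write a full panel with heading and body, and close both divs on dispose:

public static IDisposable BeginPanel(this HtmlHelper htmlHelper, string title)
{
  var writer = htmlHelper.ViewContext.Writer;

  writer.Write(@"<div class=""panel panel-default"">");
  writer.Write(@"<div class=""panel-heading"">" + htmlHelper.Encode(title) + "</div>");
  writer.Write(@"<div class=""panel-body"">");

  // Close the panel body and then the panel itself.
  return new DisposableHtmlHelper(() =>
  {
    htmlHelper.EndDiv();
    htmlHelper.EndDiv();
  });
}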

ADO.NET – Connected Layer

Intro

The connected layer provides functionality for running SQL against a connected database. The statements can be select, insert, update or delete statements, as well as schema statements, stored procedures and functions.

The connected layer requires the database connection to remain open while the database transactions are being performed.

Source Code

All source code can be found on GitHub here.

This is part of my HowTo in .NET series. An overview can be seen here.

Command Objects

The Command object represents a SQL statement to be run: select, insert, update, delete, a schema statement, a stored procedure etc.

The DbCommand is configured to execute a SQL statement or stored procedure via the CommandText property.

The CommandType property defines the type of statement held in the CommandText property.

The CommandType values are:

StoredProcedure – CommandText contains the name of a stored procedure or user defined function.

TableDirect – CommandText contains a table name. All rows and columns will be returned from the table.

Text – CommandText defines a SQL statement to be executed.

The CommandType defaults to Text; however, it is good practice to set it explicitly.

var connectionDetails =
  ConfigurationManager.ConnectionStrings["MyDatabase"];

var providerName = connectionDetails.ProviderName;
var connectionString = connectionDetails.ConnectionString;

var dbFactory = DbProviderFactories.GetFactory(providerName);

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;
  cn.Open();

  var cmd = cn.CreateCommand();

  cmd.CommandText = "Select * from MyTable";
  cmd.CommandType = CommandType.Text;
  cmd.Connection = cn;
}

The command should be passed the connection via the Connection property.

The command object will not touch the database until either ExecuteReader, ExecuteNonQuery or an equivalent method has been called.

DataReader

The DataReader class provides a read-only, iterative view of the data returned from a configured command object.

A DataReader is instantiated by calling ExecuteReader upon the command object.

The DataReader exposes a forward-only iterator via the Read method, which returns false when the collection has been completely iterated.

The DataReader represents both the collection of records and the individual records; access to the data is made by calling methods upon the DataReader.

DataReader implements IDisposable and should be used within a using scope.

Data can be accessed by field name or ordinal position via the indexer [], which returns the data as object. Alternatively, methods exist to get the data as any .NET primitive type.

using (var dr = cmd.ExecuteReader())
{
  while (dr.Read())
  {
    var val1 = (string)dr["FieldName"];
    var val2 = (int)dr[0];

    // Get the field name or its ordinal position
    var name = dr.GetName(1);
    var pos = dr.GetOrdinal("FieldName");

    // Strongly typed data
    var val3 = dr.GetInt32(1);
    var val4 = dr.GetDecimal(2);
    var val5 = dr.GetDouble(3);
    var val6 = dr.GetString(4);

    var isNull = dr.IsDBNull(5);

    var fdType = dr.GetProviderSpecificFieldType(6);
  }
}

The IsDBNull method can be used to determine whether a field contains a null. The provider-specific type of a field can be retrieved with the GetProviderSpecificFieldType method, while GetFieldType returns the .NET type.

Multiple Results

If the select statement of a command returns multiple result sets, the DataReader.NextResult method exposes a forward-only iterator through each result set:

string strSQL = "Select * From MyTable;Select * from MyOtherTable";

NextResult advances to the next result set and, just like Read, returns false once the results have been exhausted. Note that the reader is initially positioned on the first result set, so it should be read before calling NextResult:

using (var dr = cmd.ExecuteReader())
{
  do
  {
    while (dr.Read())
    {
      // Process the current record of the current result set.
    }
  } while (dr.NextResult());
}

DataSet

A DataReader can also be used to fill a DataTable or DataSet. We talk more about the DataSet within the disconnected layer in the next chapter.

var dtable = new DataTable();

using (var dr = cmd.ExecuteReader())
{
  dtable.Load(dr);
}

ExecuteNonQuery

ExecuteNonQuery allows execution of an insert, update or delete statement, as well as schema statements.

The method returns an integer representing the number of affected rows.

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;
  cn.Open();

  var cmd = cn.CreateCommand();

  cmd.CommandText = @"Insert Into MyTable (FieldA) Values ('Hello')";
  cmd.CommandType = CommandType.Text;
  cmd.Connection = cn;

  var count = cmd.ExecuteNonQuery();
}

Command Parameters

Command parameters allow configuring stored procedure parameters as well as parameterized SQL statements, which can help protect against SQL injection attacks.

It is strongly advised that any information collected from a user is sent to the database as a parameter, regardless of whether it is to be persisted or used within a predicate.

Parameters can be added with Command.Parameters.AddWithValue (available on the provider specific parameter collections) or by instantiating a DbParameter class.

The example below shows the latter.

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;
  cn.Open();

  var cmd = cn.CreateCommand();

  cmd.CommandText = @"Insert Into MyTable (FieldA) Values (@Hello)";
  cmd.CommandType = CommandType.Text;

  var param = cmd.CreateParameter();

  param.ParameterName = "@Hello";
  param.DbType = DbType.String;
  param.Value = "Value";
  param.Direction = ParameterDirection.Input;
  cmd.Parameters.Add(param);

  cmd.Connection = cn;

  var count = cmd.ExecuteNonQuery();
}
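
For completeness, a minimal sketch of the former approach follows. AddWithValue is exposed by the provider specific parameter collections (for example SqlParameterCollection) rather than the generic DbParameterCollection, so the command is created as a SqlCommand here (requires System.Data.SqlClient):

using (var cn = new SqlConnection(connectionString))
{
  cn.Open();

  var cmd = new SqlCommand("Insert Into MyTable (FieldA) Values (@Hello)", cn);

  // AddWithValue infers the parameter type from the supplied value.
  cmd.Parameters.AddWithValue("@Hello", "Value");

  var count = cmd.ExecuteNonQuery();
}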

The DbParameter class has a DbType property which allows setting the type of the parameter; it is vendor agnostic.

SqlParameter and MySqlParameter also provide SqlDbType and MySqlDbType properties which can be used to set the type from a vendor specific enum. Setting the DbType will keep the vendor specific property in sync and vice versa.
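
As a short, standalone illustration of this synchronisation (SQL Server specific):

using System;
using System.Data;
using System.Data.SqlClient;

class DbTypeSyncDemo
{
  static void Main()
  {
    // Create a parameter using the vendor specific enum.
    var param = new SqlParameter("@Hello", SqlDbType.NVarChar);
    Console.WriteLine(param.DbType);    // String

    // Setting the vendor agnostic DbType updates SqlDbType as well.
    param.DbType = DbType.Int32;
    Console.WriteLine(param.SqlDbType); // Int
  }
}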

Executing a Stored Procedure

A stored procedure is executed by configuring a DbCommand with the name of the stored procedure along with any required parameters, followed by calling ExecuteScalar or ExecuteReader.

ExecuteScalar is used to return a single value. If multiple result sets, rows or columns are returned, it returns the first column of the first row in the first result set.
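
ExecuteScalar is equally useful for simple aggregate queries; a minimal sketch, reusing a command object created as in the earlier examples (the table name is hypothetical):

cmd.CommandText = "Select Count(*) From MyTable";
cmd.CommandType = CommandType.Text;

// ExecuteScalar returns the first column of the first row as an object.
var rowCount = Convert.ToInt32(cmd.ExecuteScalar());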

Parameters can either be input or output.

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;
  cn.Open();

  var cmd = cn.CreateCommand();
  cmd.CommandText = "spGetFoo";
  cmd.Connection = cn;

  cmd.CommandType = CommandType.StoredProcedure;

  // Input param.
  var paramOne = cmd.CreateParameter();
  paramOne.ParameterName = "@inParam";
  paramOne.DbType = DbType.Int32;
  paramOne.Value = 1;
  paramOne.Direction = ParameterDirection.Input;
  cmd.Parameters.Add(paramOne);

  // Output param.
  var paramTwo = cmd.CreateParameter();
  paramTwo.ParameterName = "@outParam";
  paramTwo.DbType = DbType.String;
  paramTwo.Size = 10;
  paramTwo.Direction = ParameterDirection.Output;
  cmd.Parameters.Add(paramTwo);

  // Execute the stored proc.
  var result = cmd.ExecuteScalar();

  // Read the output param.
  var outParam = (string)cmd.Parameters["@outParam"].Value;

  // This can also be read from the parameter directly.
  var outParam2 = (string)paramTwo.Value;
}

 

The ParameterDirection members are:

Input – The parameter is an input parameter (default).

InputOutput – The parameter is capable of both input and output.

Output – The parameter is an output parameter and has to be suffixed with the out keyword in the parameter list of a stored procedure, built in function or user defined function.

ReturnValue – The parameter is a return value, scalar or similar, but not a result set. This is determined by the return keyword in a stored procedure, built in function or user defined function.
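
As an illustration of the ReturnValue direction, a value produced by a stored procedure's return statement can be read through a parameter configured as follows. This is a sketch only, reusing the command configured above and assuming spGetFoo returns an integer:

var returnParam = cmd.CreateParameter();
returnParam.ParameterName = "@returnValue";
returnParam.DbType = DbType.Int32;
returnParam.Direction = ParameterDirection.ReturnValue;
cmd.Parameters.Add(returnParam);

cmd.ExecuteNonQuery();

// After execution the return value is available on the parameter.
var returnValue = (int)returnParam.Value;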

If the stored procedure returns a result set rather than a single value, ExecuteReader can be used to iterate over the results:

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;
  cn.Open();

  var cmd = cn.CreateCommand();
  cmd.CommandText = "spGetFoo";
  cmd.Connection = cn;

  cmd.CommandType = CommandType.StoredProcedure;

  using (var dr = cmd.ExecuteReader())
  {
    do
    {
      while (dr.Read())
      {
        // Process the current record.
      }
    } while (dr.NextResult());
  }
}

Database Transactions

When work against a database involves more than one write action, it is essential that all the actions are wrapped up in a unit of work called a database transaction.

ACID

The desired characteristics of a transaction are defined by the term ACID:

Atomicity – All changes within a unit of work complete or none complete; they are atomic.

Consistency – The state of the data is consistent and valid. If, upon completing all of the changes, the data is considered invalid, all the changes are undone to return the data to its original state.

Isolation – All changes within a unit of work occur in isolation from other read and write transactions. No one can see any changes until all changes have been completed in full.

Durability – Once all the changes within a unit of work have completed, all the changes will be persisted.

Syntax

A transaction is started through the connection object and attached to any command which should run within the same transaction.

Commit should be called upon the transaction on successful completion, while Rollback should be called if an error occurred. This can be achieved with a try catch statement:

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;
  cn.Open();

  var cmdOne = cn.CreateCommand();
  cmdOne.CommandText = "spUpdateFoo";
  cmdOne.Connection = cn;
  cmdOne.CommandType = CommandType.StoredProcedure;

  var cmdTwo = cn.CreateCommand();
  cmdTwo.CommandText = "spUpdateMoo";
  cmdTwo.Connection = cn;
  cmdTwo.CommandType = CommandType.StoredProcedure;

  var tran = cn.BeginTransaction();

  try
  {
    cmdOne.Transaction = tran;
    cmdTwo.Transaction = tran;

    cmdOne.ExecuteNonQuery();
    cmdTwo.ExecuteNonQuery();

    tran.Commit();
  }
  catch (Exception)
  {
    tran.Rollback();
  }
}

The DbTransaction class implements IDisposable and can therefore be used within a using statement rather than a try catch. The Dispose method will be called implicitly upon leaving the scope of the using statement; this causes any uncommitted changes to be rolled back while leaving any committed changes persisted. This is the preferred syntax for writing transactions in ADO.NET:

var connectionDetails =
  ConfigurationManager.ConnectionStrings["MyDatabase"];

var providerName = connectionDetails.ProviderName;
var connectionString = connectionDetails.ConnectionString;

var dbFactory = DbProviderFactories.GetFactory(providerName);

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;
  cn.Open();

  var cmdOne = cn.CreateCommand();
  cmdOne.CommandText = "spUpdateFoo";
  cmdOne.Connection = cn;
  cmdOne.CommandType = CommandType.StoredProcedure;

  var cmdTwo = cn.CreateCommand();
  cmdTwo.CommandText = "spUpdateMoo";
  cmdTwo.Connection = cn;
  cmdTwo.CommandType = CommandType.StoredProcedure;

  using (var tran = cn.BeginTransaction())
  {
    cmdOne.Transaction = tran;
    cmdTwo.Transaction = tran;

    cmdOne.ExecuteNonQuery();
    cmdTwo.ExecuteNonQuery();

    tran.Commit();
  }
}

Concurrency Issues

Reading from and writing to a database can suffer from concurrency issues when multiple transactions occur at the same time.

Lost Update – Two or more transactions perform write actions on the same record at the same time without being aware of each other's changes. The last transaction persists its state of the data, overwriting any changes made to the same fields by the first.

Dirty Read – Data is read while a transaction has started but not finished writing. The data is considered dirty as it represents a state of the data which should never have existed.

Nonrepeatable Read – A transaction reads a record multiple times and is presented with multiple versions of the same record due to another transaction writing to the same record.

Phantom Read – Data from a table is read while inserts or deletes are being made. The result set is missing rows from inserts which have not yet finished, or contains rows which are no longer in the table due to the deletes.

Isolation Level

Isolation levels define rules for accessing data when another transaction is running.

The level is set when creating a transaction with the BeginTransaction method and can be read with the IsolationLevel property:

using (var tran = cn.BeginTransaction(IsolationLevel.ReadCommitted))
{
  cmdOne.Transaction = tran;
  cmdTwo.Transaction = tran;

  IsolationLevel isolationLevel = tran.IsolationLevel;

  cmdOne.ExecuteNonQuery();
  cmdTwo.ExecuteNonQuery();

  tran.Commit();
}

The IsolationLevel enum ranges from Unspecified, considered the lowest level, through to Serializable, the highest, as we traverse the table below; the amount of concurrency allowed runs in the opposite direction. Chaos and Snapshot are considered outside of this low to high range.

The isolation levels, with descriptions based on MSDN:

Unspecified – The isolation level is undetermined: a different isolation level than the one specified is being used, but the level cannot be determined. Often found in older technologies which work differently from current standards, such as OdbcTransaction.

ReadUncommitted – A transaction reading data places no shared or exclusive locks upon the data being read. Other transactions are free to insert, delete and edit the data being read. This level allows a high degree of concurrency between transactions, which is good for performance, but it suffers from the possibility of dirty, nonrepeatable and phantom reads.

ReadCommitted – A transaction reading data places a shared lock on the data being read. Other transactions are free to insert or delete rows but are not permitted to edit the data being read. This level does not suffer from dirty reads; it does suffer from nonrepeatable and phantom reads. This is the default isolation level in most databases.

RepeatableRead – A transaction reading data places an exclusive lock on the data being read. Other transactions are free to insert data into the table but are not free to edit or delete the data being read. This level does not suffer from dirty or nonrepeatable reads; it does suffer from phantom reads.

Serializable – A transaction reading data places an exclusive lock on the data being read. Other transactions are not free to insert, update or delete the data being read. This level does not suffer from dirty, nonrepeatable or phantom reads; it allows the least concurrency and is bad for performance.

Chaos – Allows dirty reads, lost updates, phantom and nonrepeatable reads, but a transaction at this level cannot interfere with transactions running at a higher isolation level.

Snapshot – Every write transaction records a virtual snapshot of the data as it was before the transaction started. No exclusive lock is taken on the data; isolation is guaranteed by allowing other transactions to read the data in the state it was in before the write transaction started.

Checkpoints

Checkpoints provide a way of defining temporary save points; data can be rolled back to these intermediate points.

Save does not persist the changes to the database and a Commit would still be required upon completion of all writable transactions.

// Save and the named Rollback overload are exposed by the provider specific transaction
// (SqlTransaction here, which requires System.Data.SqlClient), not by the generic DbTransaction.
using (var tran = (SqlTransaction)cn.BeginTransaction(IsolationLevel.ReadCommitted))
{
  cmdOne.Transaction = tran;
  cmdTwo.Transaction = tran;

  cmdOne.ExecuteNonQuery();

  tran.Save("Charlie");

  cmdTwo.ExecuteNonQuery();

  tran.Rollback("Charlie");
  tran.Commit();
}

Not all database vendors provide checkpoints. SQL Server does. MySQL does too, though the MySQL Connector/Net ADO.NET data provider does not currently appear to support this feature.

Nested Transactions

If the database vendor / ADO.NET Data Provider does not support checkpoints, nested transactions can always be implemented to achieve the same result. Here a new transaction can be created within the scope of another transaction.

using (var tranOuter = cn.BeginTransaction())
{
  cmdOne.Transaction = tranOuter;
  cmdOne.ExecuteNonQuery();

  using (var tranInner = cn.BeginTransaction())
  {
    cmdTwo.Transaction = tranInner;

    cmdTwo.ExecuteNonQuery();
    tranInner.Rollback();
  }

  tranOuter.Commit();
}

ADO.NET – Connections And Data Providers

Intro

ADO.NET is a framework for accessing, interrogating, manipulating and persisting data in relational databases.

Source Code

All source code can be found on GitHub here.

This is part of my HowTo in .NET series. An overview can be seen here.

Data Providers

The framework abstracts consuming code away from the specifics of each vendor's database implementation, allowing code to be written which is virtually agnostic to the data source.

Each ADO.NET capable database provides an ADO.NET data provider which handles the database's specific implementation needs.

Through a series of classes, abstract classes and interfaces, ADO.NET provides functionality which is consistent regardless of the database vendor.

ADO.NET data providers exist for most database vendors. A list can be found here: http://msdn.microsoft.com/en-gb/data/dd363565.aspx

Configuring A Database Provider

Depending upon your choice of database and data provider, you might need to configure the provider to be used with .NET.

SQL Server is configured automatically on Windows/.NET. Under Linux/Mono, SQLite is configured automatically yet MySQL is not.

If you are using MySQL I would recommend the Connector/Net ADO.NET data provider. You can download the driver here: http://dev.mysql.com/downloads/connector/net/

The data provider is written in .NET and is provided as an assembly which needs to be installed into the GAC.

sudo gacutil -i MySql.Data.dll

To check that the installation went OK, you can list the contents of the GAC with the gacutil command. The command below uses grep to filter the results:

gacutil -l | grep My

Data providers are registered within the machine.config of the .NET version you are running.

For windows the machine.config location will look something like:

C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config

For Mono and Debian based Linux the machine.config location will look something like:

/usr/local/etc/mono/2.0/machine.config

The configuration will be specific to the version of your data provider. You should check the installation page of your vendor to determine this. The following is for the MySQL Connector provider version 6.3.5.0:

<add name="MySQL Data Provider"
     invariant="MySql.Data.MySqlClient"
     description=".Net Framework Data Provider for MySQL"
     type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,
           Version=6.3.5.0, Culture=neutral,
           PublicKeyToken=c5687fc88969c44d" />

This should be copied into the machine.config within the <DbProviderFactories> node, which sits within the <system.data> node. You should find other providers already configured.

The data provider can then be referenced as any other assembly within the GAC.
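
To verify which data providers are registered on a machine, the registered factories can be enumerated; a small sketch:

using System;
using System.Data;
using System.Data.Common;

class ListProviders
{
  static void Main()
  {
    // Returns a DataTable describing every registered ADO.NET data provider.
    DataTable providers = DbProviderFactories.GetFactoryClasses();

    foreach (DataRow provider in providers.Rows)
    {
      // Columns include Name, Description and InvariantName.
      Console.WriteLine("{0} ({1})", provider["Name"], provider["InvariantName"]);
    }
  }
}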

Base Classes

ADO.NET provides a series of core and abstract classes; they provide common functionality that is not specific to any data provider.

ADO.NET also provides a series of interfaces. ADO.NET providers implement these interfaces.

The data providers are welcome to inherit from the abstract classes, and most do, to reuse the common functionality associated with them.

The ADO.NET API consumes the interfaces rather than the abstract classes to allow data providers greater control over their required functionality.

Below we describe the core abstract classes and their interfaces.

System.Data – Provides a common namespace for most of the abstract classes and interfaces which each data provider inherits from or implements. It also contains common entities which are not specific to a data provider: datasets, tables, rows, columns, relational constraints etc.

DbConnection (IDbConnection) – Support for configuring connections.

DbTransaction (IDbTransaction) – Support for database transactions.

DbCommand (IDbCommand) – Support for calling SQL statements, stored procedures and parameterized queries. Provides access to an instance of the DataReader class through the ExecuteReader() method.

DbParameterCollection (IDataParameterCollection) – Provides a collection of DbParameters to a DbCommand.

DbParameter (IDbDataParameter, IDataParameter) – Provides a parameter to SQL statements and stored procedures. It is used by DbCommand.

DbDataReader (IDataReader) – Provides a read-only iterative view of the data returned from a SQL statement. Provides access to strongly typed data via field names or their ordinal positions.

DbDataAdapter (IDbDataAdapter, IDataAdapter) – Provides access to a cached subset of data, monitoring any changes which can then be persisted back into the database at a later time.

Below we map the abstract classes and interfaces to the SQL Server and MySQL data provider classes. As you can see the naming convention is consistent:

Base class or namespace – SQL Server / MySQL

System.Data – System.Data.SqlClient / MySql.Data.MySqlClient

DbConnection – SqlConnection / MySqlConnection

DbTransaction – SqlTransaction / MySqlTransaction

DbCommand – SqlCommand / MySqlCommand

DbParameterCollection – SqlParameterCollection / MySqlParameterCollection

DbParameter (IDbDataParameter, IDataParameter) – SqlParameter / MySqlParameter

DbDataReader (IDataReader) – SqlDataReader / MySqlDataReader

DbDataAdapter (IDbDataAdapter, IDataAdapter) – SqlDataAdapter / MySqlDataAdapter

Though it is possible to code directly to the MySQL or SQL Server data provider classes, it is not recommended and considered bad practice.

All the examples will make use of the DbProviderFactories class and the associated DbProviderFactory class to create instances of the required data provider specific classes. DbProviderFactories uses late binding to determine which class the data provider has supplied as part of its implementation.
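
As a brief sketch of this factory pattern (the provider name below is the MySQL invariant name used elsewhere in this post):

DbProviderFactory dbFactory = DbProviderFactories.GetFactory("MySql.Data.MySqlClient");

// Each Create method returns the provider specific implementation, e.g. MySqlConnection.
DbConnection connection = dbFactory.CreateConnection();
DbCommand command = dbFactory.CreateCommand();
DbParameter parameter = dbFactory.CreateParameter();
DbDataAdapter adapter = dbFactory.CreateDataAdapter();
DbConnectionStringBuilder builder = dbFactory.CreateConnectionStringBuilder();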

Database Connections

Connection Strings

ADO.NET allows a virtually database agnostic approach to database access within .NET.

The connection strings are vendor specific and can turn a number of features on and off, each of which might be specific to the vendor.

In short, the minimum requirements are the server name, the database name and some form of login criteria. Each vendor's connection string is specific to the vendor and data provider. Here are some examples:

SQL Server:

Data Source=localhost;Integrated Security=SSPI;Initial Catalog=MyDB

SSPI means the logged in user will be used to connect with Windows integrated security. Alternatively you could provide the username and password.

MySQL:

Server=localhost;Database=MyDB;Uid=myUsername;Pwd=myPassword;

The MySQL data provider does not appear to support SSPI.

It is advised to use connection pooling when concurrent database access is required. You should check each vendor for how to set this up.
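
As an example, the SQL Server provider exposes pooling related keywords directly in the connection string; a sketch only, as keyword names vary by vendor:

var pooledConnectionString =
  "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=MyDB;" +
  "Pooling=true;Min Pool Size=5;Max Pool Size=100";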

http://www.connectionstrings.com provides an excellent knowledge base for connecting to most databases in most technologies including ADO.NET.

Machine.Config

Though connection strings can be defined anywhere, it is good practice to place them within the machine.config. This allows changing connection criteria, including the database server, without recompiling code; great for having separate databases for development, testing and production.

Connection strings should be placed within the connectionStrings node, which sits after the appSettings node.

The connection string can be named with the name field for reference in code later.

The provider's assembly is referenced with the providerName field.

SQL Server:

<connectionStrings>
  <add name="MyDatabase"
       providerName="System.Data.SqlClient"
       connectionString="Data Source=localhost;Integrated Security=SSPI;Initial Catalog=MyDB"/>
</connectionStrings>

MySQL:

<connectionStrings>
  <add name="MyDatabase"
       providerName="MySql.Data.MySqlClient"
       connectionString="Server=localhost;Database=MyDB;User ID=Me;Password=MyPassword" />
</connectionStrings>

ConfigurationManager

The ConfigurationManager provides access to the defined connections within the Machine.Config (and App.Config) files via the name provided. Above we defined a connection called MyDatabase:

var connectionDetails =
ConfigurationManager.ConnectionStrings["MyDatabase"];

var providerName = connectionDetails.ProviderName;
var connectionString = connectionDetails.ConnectionString;

Connecting To a Database

To connect to a database an instance of DbConnection should be initialised against the database vendor specific connection string.

The DbProviderFactories class can be used to create the database connection object. It takes the database provider name which we configured as part of the connection information.

DbProviderFactory dbFactory =
  DbProviderFactories.GetFactory(providerName);

using (DbConnection connection = dbFactory.CreateConnection())
{
  connection.ConnectionString = connectionString;
  connection.Open();
}

DbConnection implements IDisposable; it should be contained within a using scope.

ConnectionStringBuilder

A connection string builder (DbConnectionStringBuilder, created via the factory) can be used in conjunction with a connection string to help customise a connection string in code. The indexer [] can be used to set key value pairs representing connection properties.

It can be initialised with a connection string:

var connectionDetails =
  ConfigurationManager.ConnectionStrings["MyDatabase"];

var providerName = connectionDetails.ProviderName;
var connectionString = connectionDetails.ConnectionString;

var dbFactory = DbProviderFactories.GetFactory(providerName);

var builder = dbFactory.CreateConnectionStringBuilder();
builder.ConnectionString = connectionString;
builder["ConnectionTimeout"] = 60;

using (var connection = dbFactory.CreateConnection())
{
  connection.ConnectionString = builder.ConnectionString;
  connection.Open();
}

Connected vs Disconnected Layers

ADO.NET provides two conceptual layers; connected and disconnected.

The connected layer requires a live connection to the database to remain open during interaction. It is used for short interactions with the database such as calling stored procedures, running schema updates or performing write transactions. The layer can also be used for reading data, though as the connection remains open it is advised to keep its use short.

The disconnected layer populates a DataSet object which provides an offline representation of the data. The connection remains open only for the amount of time it takes to populate the DataSet. The DataSet can then be manipulated independently without the connection being open, and the data can even be persisted back to the database at a later date.
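
For contrast, a brief sketch of the disconnected layer (covered in detail in the next post), reusing the factory and connection string from the examples above and filling a DataSet through a DbDataAdapter:

using (var cn = dbFactory.CreateConnection())
{
  cn.ConnectionString = connectionString;

  var cmd = cn.CreateCommand();
  cmd.CommandText = "Select * From MyTable";
  cmd.CommandType = CommandType.Text;
  cmd.Connection = cn;

  var adapter = dbFactory.CreateDataAdapter();
  adapter.SelectCommand = cmd;

  var dataSet = new DataSet();

  // Fill opens and closes the connection implicitly if it is not already open.
  adapter.Fill(dataSet);
}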

Rhino Mocks Cheat Sheet

A cheat sheet for Rhino Mocks v 3.6 in C#

All source code can be found on GitHub here.

This post is part of my cheat sheet series.

You can see my HowTo here.

// *** STUB ****
// A stub is an object which is required to satisfy the SUT (system under test).

mocks.CreateStub<T>();

// The simple stub
var sut = MockRepository.GenerateStub<ISimpleModel>();
sut.Stub(x => x.Do()).Return(1);

// Stub Property
sut.Stub(x => x.AReadonlyPropery).Return(1); // Readonly (only get)
sut.AProperty = 2; // Stub Get

// Repeat
mock.Stub(x => x.Do()).Return(1).Repeat.Once();
mock.Stub(x => x.Do()).Return(2).Repeat.Twice();
mock.Stub(x => x.Do()).Return(3).Repeat.Times(3);

// Events
mock.Raise(x => x.Load += null, this, EventArgs.Empty);

// **** Arguments Conditional ****
// These can be used for Stub, Expect, AssertWasCalled and AssertWasNotCalled

// Ignore Arguments Conditional
sut.Stub(x => x.Do(Arg<int>.Is.Equal(1))).IgnoreArguments().Return(1);

// Is Conditionals
sut.Stub(x => x.Do(Arg<int>.Is.Anything)).Return(1);
sut.Stub(x => x.Do(Arg<int>.Is.Equal(1))).Return(1);
sut.Stub(x => x.Do(Arg<int>.Is.NotEqual(1))).Return(10);
sut.Stub(x => x.DoIFoo(Arg<Foo>.Is.Null)).Return(1);
sut.Stub(x => x.DoIFoo(Arg<Foo>.Is.NotNull)).Return(2);
sut.Stub(x => x.Do(Arg<int>.Is.LessThanOrEqual(10))).Return(1);
sut.Stub(x => x.Do(Arg<int>.Is.GreaterThan(10))).Return(2);
sut.Stub(x => x.DoIFoo(Arg<Foo>.Is.Same(foo))).Return(1);
sut.Stub(x => x.DoIFoo(Arg<Foo>.Is.NotSame(foo))).Return(2);
sut.Stub(x => x.DoIFoo(Arg<Foo>.Is.TypeOf)).Return(1);

// Matches Conditional
sut.Stub(x => x.Do(Arg<int>.Matches(y => y > 5))).Return(1);

// List Conditionals (RIS below is assumed to be an alias: using RIS = Rhino.Mocks.Constraints.Is;)
sut.Stub(x => x.Do(Arg<List<int>>.List.Count(RIS.Equal(0)))).Return(0);
sut.Stub(x => x.Do(Arg<List<int>>.List.Element(0, RIS.Equal(1)))).Return(1);
sut.Stub(x => x.Do(Arg<List<int>>.List.Equal(new int[] { 4, 5, 6 }))).Return(2);
sut.Stub(x => x.Do(Arg<List<int>>.List.IsIn(1))).Return(1);
sut.Stub(x => x.Do(Arg<int>.List.OneOf(new int[] { 4, 5, 6 }))).Return(2);

// ByRef and Out parameters
sut.Stub(x => x.Do(Arg<int>.Is.Equal(1), ref Arg<int>.Ref(RIS.Equal(0), 10).Dummy)).Return(1);
sut.Stub(x => x.Do(Arg<int>.Is.Equal(1), Arg<string>.Is.Equal("Hello"), out Arg<int>.Out(10).Dummy)).Return(1);

// **** DYNAMIC MOCKS ***
// A mock is an object which we can set expectations on; it will assert that those expectations have been met.
// Dynamic Mock provides easier syntax and does not require stubbing/expecting all methods

mocks.CreateMock<T>();

// Assert Was Called
mock.AssertWasCalled(p => p.Add(Arg<AnotherModel>.Is.Anything));
mock.AssertWasCalled(p => p.Add(Arg<AnotherModel>.Is.NotNull));
mock.AssertWasCalled(p => p.Add(Arg<AnotherModel>.Is.Equal(theModel)));
mock.AssertWasNotCalled(p => p.Add(Arg<AnotherModel>.Is.Null));
mock.AssertWasCalled(x => x.AReadonlyPropery);
mock.AssertWasCalled(x => x.AProperty);
mock.AssertWasCalled(x => x.AProperty = 9);
mock.AssertWasCalled(x => x.EventHandler += Arg<AnEvent>.Is.Anything); // Event was registered

// Expect & Verify
mock.Expect(p => p.Add(Arg<AnotherModel>.Is.Anything));
mock.Expect(p => p.Add(Arg<AnotherModel>.Is.NotNull));
mock.Expect(p => p.Add(Arg<AnotherModel>.Is.Equal(theModel)));
mock.Expect(x => x.AReadonlyPropery).Return(9);
mock.Expect(x => x.AProperty).Return(9);
mock.Expect(x => x.AProperty).SetPropertyAndIgnoreArgument();
mock.Expect(x => x.AProperty).SetPropertyWithArgument(11);

mock.VerifyAllExpectations();
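
To put the pieces together, a minimal end-to-end test might look like the following; ICalculator is a hypothetical interface used purely for illustration and NUnit style attributes are assumed:

using NUnit.Framework;
using Rhino.Mocks;

// A hypothetical dependency to mock.
public interface ICalculator
{
  int Add(int a, int b);
}

[TestFixture]
public class CalculatorTests
{
  [Test]
  public void Add_Is_Expected_And_Verified()
  {
    // Arrange: create a dynamic mock and set an expectation.
    var calculator = MockRepository.GenerateMock<ICalculator>();
    calculator.Expect(c => c.Add(1, 2)).Return(3);

    // Act.
    var result = calculator.Add(1, 2);

    // Assert the value and that the expectation was met.
    Assert.AreEqual(3, result);
    calculator.VerifyAllExpectations();
  }
}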