Encrypting files in C#.NET using the Advanced Encryption Standard (AES)


Cryptography

One of the biggest challenges when dealing with the security and encryption of a system is choosing the correct ciphering paradigm. In .NET there is a copious number of classes available in the System.Security.Cryptography namespace, and a significant number of them have been deprecated, usually because vulnerabilities were subsequently exposed, so it is very easy to pick something that is about as watertight as a sieve.

This is further compounded by the fact that the cryptography APIs are very detailed and low level; they are not easy for a novice to use, and setting a single parameter incorrectly results in a security implementation that may as well not exist. Consequently, it is imperative that this subject never be approached in a typical agile/sprint manner; security should definitely be approached using a waterfall model. Have no hesitation in advising any manager or architect that your solution “will be ready when it is ready”. The agile methodology is typically about adding units of functionality in a YAGNI way, accruing technical debt that can be paid back later through refactoring; that is simply not a correct or acceptable approach when dealing with the security of a system. Do take the time to do a lot of research, because understanding the pitfalls of the various implementations is vital to a robust security implementation.

The Advanced Encryption Standard (AES)

The abundance of so many different types of cryptography, implemented using symmetric (the same key is used to encrypt and decrypt) and asymmetric (a public key and a private key are used to encrypt and decrypt) algorithms, has necessitated that governments try to standardise implementations across departments, sites and even countries. AES was released in 2001 as a replacement for the Data Encryption Standard (DES), whose 56-bit key had been shown to be susceptible to brute-force attacks. The new standard has been widely adopted in commercial environments, as it was required to be able to protect information for a minimum of 20 to 30 years.

A number of candidate ciphers were submitted to the AES selection process by various academic and industrial teams, with the winning cipher named Rijndael (pronounced rain-dahl), a play on the names of the authors of the paper, Joan Daemen and Vincent Rijmen (paper available here). I am sure you will agree that comprehension and implementation of the paper is better suited to domain experts. The algorithm was written by two gifted, PhD calibre researchers, so your time as a developer is better spent trying to resolve the domain problems that your business is trying to solve (unless you are a cryptographer, of course). You can be sure that researchers at Microsoft have done all the time consuming work of implementing and testing the algorithm, so there is no need to try to implement the Rijndael block cipher yourself.

To this end, Microsoft have implemented the Rijndael block cipher in two .NET classes which, incidentally, both inherit from the SymmetricAlgorithm abstract base class:

  • RijndaelManaged
  • AesManaged

The AES algorithm is essentially the Rijndael symmetric algorithm with a fixed block size and iteration count. The AesManaged class functions the same way as the RijndaelManaged class, but limits blocks to 128 bits and does not allow feedback modes. Most developers tend to favour using the RijndaelManaged class directly, as that is the cipher described in the FIPS-197 specification for AES, but there are a couple of caveats. If you want to use RijndaelManaged as AES and adhere to the specification, ensure that (as sketched below):
  1. You set the block size to 128 bits
  2. You do not use CFB mode; if you do, ensure the feedback size is also 128 bits
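
To make those caveats concrete, here is a minimal sketch of configuring RijndaelManaged so that it behaves as FIPS-197 AES (the class and method names are mine, purely illustrative):

using System.Security.Cryptography;

internal static class AesConfiguration
{
    // Returns a RijndaelManaged instance restricted to the AES subset of Rijndael.
    // The caller is responsible for disposing it.
    internal static RijndaelManaged CreateAes()
    {
        // Key and IV are generated lazily the first time they are read.
        return new RijndaelManaged
        {
            BlockSize = 128,            // AES mandates a 128-bit block
            KeySize = 256,              // 128, 192 and 256 are the valid AES key sizes
            Mode = CipherMode.CBC,      // not CFB, so the feedback-size caveat never applies
            Padding = PaddingMode.PKCS7
        };
    }
}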

Unlike some of the asymmetric implementations by Microsoft, the AES implementation allows you to work at a very high level of abstraction, reducing the number of parameters you have to configure and hence the scope for error. I have created a class that allows you to encrypt and decrypt strings (your password), and then use this to encrypt a file from anywhere on your machine.

Thus far, the only practical way this algorithm can be broken is by a technique known as brute force: one or more supercomputers trying every known word in a language and countless password variations until the correct password is found. Typically, these programs run over weeks or even months, but the time required can stretch to millennia if the end user chooses a strong password to begin with, which is why having a well defined password policy is vital.

public MainWindow()
{
    InitializeComponent();

    byte[] encryptedPassword;

    // Create a new instance of the RijndaelManaged class.
    // This generates a new key and initialization vector (IV).
    using (var algorithm = new RijndaelManaged())
    {
        algorithm.KeySize = 256;
        algorithm.BlockSize = 128;

        // Encrypt the string to an array of bytes.
        encryptedPassword = Cryptology.EncryptStringToBytes("Password", algorithm.Key, algorithm.IV);
    }

    string chars = encryptedPassword.Aggregate(string.Empty, (current, b) => current + b.ToString());

    Cryptology.EncryptFile(@"C:\Users\Ira\Downloads\test.txt", @"C:\Users\Ira\Downloads\encrypted_test.txt", chars);

    Cryptology.DecryptFile(@"C:\Users\Ira\Downloads\encrypted_test.txt", @"C:\Users\Ira\Downloads\unencrypted_test.txt", chars);
}
 

I am using a 256-bit key (you can change this to 128 or 192).

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

namespace AesApp.Rijndael
{
    internal sealed class Cryptology
    {
        private const string Salt = "d5fg4df5sg4ds5fg45sdfg4";
        private const int SizeOfBuffer = 1024 * 8;

        internal static byte[] EncryptStringToBytes(string plainText, byte[] key, byte[] iv)
        {
            // Check arguments.
            if (plainText == null || plainText.Length <= 0)
            {
                throw new ArgumentNullException("plainText");
            }
            if (key == null || key.Length <= 0)
            {
                throw new ArgumentNullException("key");
            }
            if (iv == null || iv.Length <= 0)
            {
                throw new ArgumentNullException("iv");
            }

            byte[] encrypted;

            // Create a RijndaelManaged object with the specified key and IV.
            using (var rijAlg = new RijndaelManaged())
            {
                rijAlg.Key = key;
                rijAlg.IV = iv;

                // Create an encryptor to perform the stream transform.
                ICryptoTransform encryptor = rijAlg.CreateEncryptor(rijAlg.Key, rijAlg.IV);

                // Create the streams used for encryption.
                using (var msEncrypt = new MemoryStream())
                {
                    using (var csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write))
                    {
                        using (var swEncrypt = new StreamWriter(csEncrypt))
                        {
                            // Write all data to the stream.
                            swEncrypt.Write(plainText);
                        }
                        encrypted = msEncrypt.ToArray();
                    }
                }
            }

            // Return the encrypted bytes from the memory stream.
            return encrypted;
        }
 
        internal static string DecryptStringFromBytes(byte[] cipherText, byte[] key, byte[] iv)
        {
            // Check arguments.
            if (cipherText == null || cipherText.Length <= 0)
                throw new ArgumentNullException("cipherText");
            if (key == null || key.Length <= 0)
                throw new ArgumentNullException("key");
            if (iv == null || iv.Length <= 0)
                throw new ArgumentNullException("iv");

            // Declare the string used to hold the decrypted text.
            string plaintext;

            // Create a RijndaelManaged object with the specified key and IV.
            using (var rijAlg = new RijndaelManaged())
            {
                rijAlg.Key = key;
                rijAlg.IV = iv;

                // Create a decryptor to perform the stream transform.
                ICryptoTransform decryptor = rijAlg.CreateDecryptor(rijAlg.Key, rijAlg.IV);

                // Create the streams used for decryption.
                using (var msDecrypt = new MemoryStream(cipherText))
                {
                    using (var csDecrypt = new CryptoStream(msDecrypt, decryptor, CryptoStreamMode.Read))
                    {
                        using (var srDecrypt = new StreamReader(csDecrypt))
                        {
                            // Read the decrypted bytes from the decrypting stream
                            // and place them in a string.
                            plaintext = srDecrypt.ReadToEnd();
                        }
                    }
                }
            }

            return plaintext;
        }
 
        internal static void EncryptFile(string inputPath, string outputPath, string password)
        {
            var input = new FileStream(inputPath, FileMode.Open, FileAccess.Read);
            var output = new FileStream(outputPath, FileMode.OpenOrCreate, FileAccess.Write);

            // Essentially, if you want to use RijndaelManaged as AES you need to make sure that:
            // 1. The block size is set to 128 bits
            // 2. You are not using CFB mode, or if you are, the feedback size is also 128 bits
            var algorithm = new RijndaelManaged { KeySize = 256, BlockSize = 128 };
            var key = new Rfc2898DeriveBytes(password, Encoding.ASCII.GetBytes(Salt));

            algorithm.Key = key.GetBytes(algorithm.KeySize / 8);
            algorithm.IV = key.GetBytes(algorithm.BlockSize / 8);

            using (var encryptedStream = new CryptoStream(output, algorithm.CreateEncryptor(), CryptoStreamMode.Write))
            {
                CopyStream(input, encryptedStream);
            }
        }
 
        internal static void DecryptFile(string inputPath, string outputPath, string password)
        {
            var input = new FileStream(inputPath, FileMode.Open, FileAccess.Read);
            var output = new FileStream(outputPath, FileMode.OpenOrCreate, FileAccess.Write);

            // Essentially, if you want to use RijndaelManaged as AES you need to make sure that:
            // 1. The block size is set to 128 bits
            // 2. You are not using CFB mode, or if you are, the feedback size is also 128 bits
            var algorithm = new RijndaelManaged { KeySize = 256, BlockSize = 128 };
            var key = new Rfc2898DeriveBytes(password, Encoding.ASCII.GetBytes(Salt));

            algorithm.Key = key.GetBytes(algorithm.KeySize / 8);
            algorithm.IV = key.GetBytes(algorithm.BlockSize / 8);

            try
            {
                using (var decryptedStream = new CryptoStream(output, algorithm.CreateDecryptor(), CryptoStreamMode.Write))
                {
                    CopyStream(input, decryptedStream);
                }
            }
            catch (CryptographicException)
            {
                throw new InvalidDataException("Please supply a correct password");
            }
            catch (Exception ex)
            {
                // Preserve the original exception as the inner exception.
                throw new Exception(ex.Message, ex);
            }
        }
 
        private static void CopyStream(Stream input, Stream output)
        {
            using (output)
            using (input)
            {
                var buffer = new byte[SizeOfBuffer];
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read);
                }
            }
        }
    }
}
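
One thing worth noting: the Rfc2898DeriveBytes calls above use the two-argument constructor, which defaults to 1,000 PBKDF2 iterations. A variation with an explicit, higher iteration count makes each brute-force guess proportionally more expensive (the class name, method name and iteration count below are mine, purely illustrative):

using System.Security.Cryptography;
using System.Text;

internal static class KeyDerivation
{
    // Derive a 256-bit key and a 128-bit IV from a password with an explicit,
    // higher PBKDF2 iteration count. The salt matches the one used above.
    internal static void DeriveKeyAndIv(string password, out byte[] key, out byte[] iv)
    {
        byte[] salt = Encoding.ASCII.GetBytes("d5fg4df5sg4ds5fg45sdfg4");
        var deriveBytes = new Rfc2898DeriveBytes(password, salt, 10000);

        key = deriveBytes.GetBytes(32);   // 256-bit key
        iv = deriveBytes.GetBytes(16);    // 128-bit IV
    }
}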

If you have found this post useful, please take the time to rate it by clicking on the stars, or to say thanks in the comments.

You can download source code for the AesApp here.

Reinventing the wheel


One of the most significant challenges as a developer, especially when working as a consultant (though not exclusively), is the need to understand a completely alien software system very quickly, and to be productive in adding new functionality, and consequently value, to whatever the project may be. One must cultivate procedures and practices that allow for the rapid delivery of development requests that are fit for purpose.

Probably the biggest hindrance in all software development is developers who suffer from “not invented here, I didn’t write it” syndrome. Frequently I encounter developers who cannot work on a piece of code without completely rewriting it, often ending up with exactly the same functionality as the code they replaced, or reinventing a square wheel, i.e. leaving the codebase worse off from a maintainability standpoint.

The role of any Software Architect or Team Lead is primarily to ensure that the overall architecture of a software system is sound, i.e. well designed and robust, lending itself to the addition of new functionality without breaking or changing existing code. This is by no means easy, and the challenges of building software systems are well known, but software architecture is no longer in the dark ages; established design patterns that resolve most system issues are abundantly available. I cannot help but draw attention to the fact that this is the most important part of any software application: getting it wrong, in the knowledge that developers of varying abilities and skills will work on the application during its lifetime, is all but a recipe for failure.

Design

The architecture of the application must be a policy, and that policy must be enforced. The architecture should make it difficult for developers to simply do what they want, by requiring that they take a few extra steps in their algorithm or think differently (but the same as everyone else) when resolving whatever they are tasked with. This, however, must be a justified design decision, and not loose coupling for the sake of loose coupling: using interfaces between layers, for example, usually forces composability on developers where they would otherwise use direct callbacks. I have seen systems quickly turn into big balls of mud because of this, and the larger the system, the bigger the ball of mud.

 

[Image: a house]

Once the overall architecture has been formalised, what remains is adding the functionality of the different rooms. The drawing room in the home above must have all the features and components that allow it to function as such; this is analogous to any feature in a software system. It is difficult for your typical programmer to see a software system in its completeness like the house above, so it is the Architect's responsibility to ensure they are aware of what is required of them. Most developers don't like being restricted in how they are allowed to solve a problem, because most developers are not Architects, and are seldom able to design a coherent and robust system. Rewriting code someone else has written is, for the most part, an exercise in absolute futility. I cannot count the times that programmers have habitually duplicated functionality that already exists, and how negatively this has impacted a project's progression. It is very easy to lose weeks or even months of development time, all because developers were unable to augment existing code that was already working.

Sawdust

If you task a typical programmer with designing and developing a piece of functionality in an application, the results are often surprising. Requirements for software usually describe a single piece of functionality; let's take a chair for example.

[Image: a simple chair]

I am certain the audience knows what the prime function of a chair is, so I shall not elaborate; why should software be different? The objective for any programmer is to realise the requirements in as short a time as possible while keeping the result as functional as possible. Most programmers never create a chair that is simple and functional, but end up with something like this.

[Image: an overly complicated chair]

Really skilled developers will give you something like a Futon

[Image: a futon]

Most developers spend an incredible amount of time creating sawdust: separating logic and creating loosely coupled components when, a lot of the time, this is not required. Developers often gaze into their crystal ball, imagining future use cases, and try to create components that can be extended to have dual or many other uses, e.g. the futon above. This is wrong! Oftentimes all you want is a chair. All the requirements in the software specification asked for was a chair, yet the developer ended up creating a futon, and is then perplexed when they receive lukewarm responses, or even complaints that they have designed something that was not required.

Single responsibility is frequently misunderstood, and is a prime reason that programmers rewrite existing code. Usually the argument is that “I had to split the code into this class, and that class, and the other class, because I wanted to create a software component where single responsibility was as clear as possible”. The result is almost always sawdust, and fine grained sawdust at that, because well architected software systems usually just need components like the chair. It is sometimes hard for intelligent developers to accept that a lot of the code they write can be solved very simply; their overactive minds end up creating a software system that is bloated with classes and code that will never be used, because you aren't going to need it.

The Graduate

It seems almost unfair to single out graduates as prime candidates for reinventing wheels, but they are almost always the people who do this the most, from Masters graduates up to and including PhD students.

[Image: a graduate]

In any software team you need bright and talented individuals, especially where you are solving domain specific problems. Their importance cannot be overstated, but with it comes a lot of baggage. It can be very challenging managing a team of incredibly gifted developers, and inexperienced developers of any academic background can also be a drain on resources. A simple problem is often solved with the most complex algorithm when something simpler was required. It seems inconceivable, but many of the hardest problems to manage in complex software systems are the result of the undue influence of the smartest people in a team: if people are very smart, they are hardly ever asked to justify their choices. I have encountered software where users hate the application because “you require a PhD to use the software”. That complexity invariably stems from the source code. It is important to accept that smart individuals always find ways of solving the most difficult problems, but they can also be the reason why some problems in complex software systems are difficult to solve, because simplicity is seldom a prerequisite in their resolutions.

The long and the short of it is that the goal of any software is for people to use it, and most success is based on simplicity and ease of use, which is also true of maintainable software systems. Working software seldom needs to be completely rewritten, because it works. Improvements can always be made to any software, as there are no perfect software systems, so save yourself the time and effort by learning other people's code rather than rewriting it; this will allow you to solve more problems, add more features and provide real value to a project.

Saturation/Desaturation with HLSL/Pixel Shaders and WPF


Setup and Configuration

I have been working a lot with image processing applications recently, and a little while ago encountered a problem where an algorithm was slowing a portion of an application down because it performed pixel-by-pixel transformations in C# code. I was advised to fix the problem and to look fervidly into HLSL as a possible solution, and was fortunate enough to stumble upon the Shader Effects & Build Tasks project on CodePlex. The reason HLSL was suggested is that HLSL code executes on the graphics card; it is supremely efficient, lightning fast and can even execute in parallel.

Downloading and installing Shader Effects BuildTask and Templates.zip (you can also download the source instead if you want) greatly simplifies working with pixel shaders, as it provides Visual Studio integration. Make sure you install ShaderBuildTaskSetup.msi and, most importantly, read the readme, as it tells you where to unzip the required templates so that they are available to you in Visual Studio (very easy, just copy a few zipped files to a location on your machine).

Projects

Now that the required components are installed, create a new WPF project/solution in Visual Studio (I am using 2010 Ultimate, but this should work with Visual Studio 2008, the Express Editions or Silverlight) and call it SaturateDesaturate.

Add another project to this solution, this time a Shader Effect Library (ensure the templates linked to above are installed) and call it ShaderEffects.

ShaderEffects

This new shader effects library hooks up the plumbing required to easily create pixel shaders. Again, I would advise you to read the readme in this new .dll project. Right click the file that ends in .fx, select Properties, and you should see the following

BuildAction

When you installed the build tasks and templates above, this key component was added to this type of .dll project, so whenever you add a new effect file, ensure that the .fx file has its Build Action set to Effect, or else things will not work.

Code Sample

Delete the automatically added Effect1.cs and Effect1.fx, but make sure you do not delete EffectLibrary.cs, as this will be used by any effects that you add.

Add a new shader effect to the ShaderEffects project and call it DesaturateEffect. For brevity's sake, copy the following code into it:

 

using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;

namespace ShaderEffects
{
    public class DesaturateEffect : ShaderEffect
    {
        #region Constructors

        static DesaturateEffect()
        {
            _pixelShader.UriSource = Global.MakePackUri("DesaturateEffect.ps");
        }

        public DesaturateEffect()
        {
            this.PixelShader = _pixelShader;

            // Update each DependencyProperty that's registered with a shader register.
            // This is needed to ensure the shader gets sent the proper default value.
            UpdateShaderValue(InputProperty);
            UpdateShaderValue(SaturationProperty);
        }

        #endregion

        #region Dependency Properties

        public Brush Input
        {
            get { return (Brush)GetValue(InputProperty); }
            set { SetValue(InputProperty, value); }
        }

        // Brush-valued properties turn into sampler properties in the shader.
        // This helper sets "ImplicitInput" as the default, meaning the default
        // sampler is whatever the rendering of the element it's being applied to is.
        public static readonly DependencyProperty InputProperty =
            RegisterPixelShaderSamplerProperty("Input", typeof(DesaturateEffect), 0);

        public double Saturation
        {
            get { return (double)GetValue(SaturationProperty); }
            set { SetValue(SaturationProperty, value); }
        }

        public static readonly DependencyProperty SaturationProperty =
            DependencyProperty.Register("Saturation",
                typeof(double),
                typeof(DesaturateEffect),
                new UIPropertyMetadata(1.0, PixelShaderConstantCallback(0)));

        #endregion

        #region Member Data

        private static PixelShader _pixelShader = new PixelShader();

        #endregion
    }
}
 
 

This essentially hooks up the WPF piece to the HLSL piece: the PixelShaderConstantCallback(0) in the Saturation property metadata pushes the property's value into shader constant register c0 whenever it changes.
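
For completeness, the effect can also be applied from code-behind rather than XAML; a small sketch (the element name here is hypothetical):

// imageControl is any UIElement, for example an Image declared in XAML.
var desaturate = new DesaturateEffect { Saturation = 0.0 };   // 0 = greyscale, 1 = unchanged
imageControl.Effect = desaturate;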

HLSL

Copy the following code into your .fx file

//--------------------------------------------------------------------------------------
//
// WPF ShaderEffect HLSL -- DesaturateEffect
//
//--------------------------------------------------------------------------------------

//--------------------------------------------------------------------------------------
// Shader constant register mappings (scalars - float, double, Point, Color, Point3D, etc.)
//--------------------------------------------------------------------------------------

float4 Saturation : register(c0);

//--------------------------------------------------------------------------------------
// Sampler Inputs (Brushes, including ImplicitInput)
//--------------------------------------------------------------------------------------

sampler2D implicitInputSampler : register(S0);

//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float3 LuminanceWeights = float3(0.299, 0.587, 0.114);
    float4 srcPixel = tex2D(implicitInputSampler, uv);
    float  luminance = dot(srcPixel, LuminanceWeights);
    float4 dstPixel = lerp(luminance, srcPixel, Saturation);

    // Retain the incoming alpha.
    dstPixel.a = srcPixel.a;
    return dstPixel;
}
 
 

This simple HLSL code is applied to every pixel of the image by the graphics card, saturating or desaturating it. In MainWindow.xaml, copy in the following code:

<Window x:Class="SaturateDesaturate.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:shaders="clr-namespace:ShaderEffects;assembly=ShaderEffects"
        Title="MainWindow" Height="600" Width="800">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*"/>
            <RowDefinition Height="30"/>
        </Grid.RowDefinitions>

        <Image Source="/SaturateDesaturate;component/Images/Jellyfish.jpg" Grid.Row="0">
            <Image.Effect>
                <shaders:DesaturateEffect Saturation="{Binding ElementName=slider, Path=Value}" />
            </Image.Effect>
        </Image>

        <Slider x:Name="slider" Minimum="0" Maximum="6" Value="1" Grid.Row="1" />
    </Grid>
</Window>

If you add an image and compare the saturation with a program like Paint.NET, you will see that you now have saturation control over your WPF images with very few lines of code, and best of all it is very fast. I would also recommend searching CodePlex for a shader effects library that contains many more shader effects; it will help you get started and understand how they work.

I would also advise you to ensure that you remember to freeze all pixel shader instances (in your effect .cs files) and remove any static references (especially static constructors and shaders), as they are a prime candidate for memory leaks.
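
One way to follow that advice, keeping everything else in the DesaturateEffect listing above unchanged, is to drop the static field and static constructor and freeze the shader per instance. A rough sketch (the dependency properties stay exactly as registered earlier):

public class DesaturateEffect : ShaderEffect
{
    public DesaturateEffect()
    {
        // Create the shader per effect instance instead of holding it in a static field.
        var pixelShader = new PixelShader
        {
            UriSource = Global.MakePackUri("DesaturateEffect.ps")
        };

        // Freeze it if possible so WPF stops tracking changes on it.
        if (pixelShader.CanFreeze)
        {
            pixelShader.Freeze();
        }

        PixelShader = pixelShader;

        UpdateShaderValue(InputProperty);
        UpdateShaderValue(SaturationProperty);
    }

    // Input and Saturation dependency properties as in the earlier listing...
}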

Original Image

 

Original

Paint.NET

 

Paint.NET200

HLSL Example

 

HLSL

Writing better code


A key factor in being a successful developer is writing good, solid, maintainable, error free code. It seems a pretty obvious, albeit innocuous, statement, so why state the obvious? When you check code into the main branch of your organisation's source code, you immediately expose your abilities and disabilities as a developer to the people you work with and for. If you check in a bad bit of code that contains bugs, it usually isn't you who first notices the bug but a colleague, and you can look inept, especially if it is a silly mistake (and if you make a few of them, your boss may start thinking you are unfit to continue in your role in the company).

Writing good code is very difficult, especially when the code base is changing rapidly, so as a developer you need tools that make your life easier; the more experienced you become, the more you look for ways of doing just that. JetBrains have a Visual Studio add-in called ReSharper, a little like FxCop, which the smarter developers amongst us use. If you are wondering why some developers seem to consistently produce very clean code that avoids silly bugs (even on a Friday afternoon), you will more than likely find that they are using ReSharper.

I don't doubt that a lot of people dislike the fact that it takes over your IDE (this really is for the better, though). In fact, if I recollect correctly, I tried ReSharper a few times after word-of-mouth recommendations and removed it every time because my IDE was less responsive, and because I was impetuous. It was only when I released a test application to a customer a few months ago that kept crashing unexpectedly that desperation set in and drove me, frantically, to look for errors in my code. The main culprit in that specific instance was a handful of null references in the codebase that had gone unnoticed; ReSharper readily identified them and I corrected them, achieving application stability over the course of a code review lasting a few days. I'm not saying that all my problems were fixed, but most of the niggling issues were, with little to no effort.

It is not just null checks that ReSharper identifies, but a pretty comprehensive set of common coding mistakes that even the best programmers frequently make. The big win is that you get to correct problems there and then, as the code is written, before they become an issue. Visual Studio 2008 introduced background compilation that quietly increases developer productivity by showing errors as the code is typed, rather than waiting for F5; ReSharper is orders of magnitude more helpful. Used correctly, you end up with a very clean codebase, with no unnecessary code and with common errors identified, giving you confidence in your code.

To show a few basic examples

22-05-2010 15-16-47

Here I can remove the redundant delegate constructor call that the IDE generates. I can also remove the this keyword. The this keyword is rather abused in C# (Me in Visual Basic), and a lot of code samples (even from Microsoft) show developers in misguided bliss with the word. One only ever needs to use it when a class has two variables with the same name (typically a field and a constructor parameter), like this

22-05-2010 15-26-58
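
A minimal illustration of that one legitimate case (the class and field names are only for illustration):

public class Customer
{
    private string name;

    public Customer(string name)
    {
        // "this" is required here to distinguish the field from the
        // constructor parameter of the same name.
        this.name = name;
    }
}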

In most other cases it is redundant to use the word.

22-05-2010 15-21-35

In the sample above there are several issues: unused namespace imports at the top, a poorly named control, and two poorly named variables that can also be made readonly. This is just an iota of the static analysis assistance that you get as a developer; you can adopt the mantra of “writing a little code that does a lot”, rather than “writing a lot of code that does a lot”, with significantly fewer bugs.
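
To give a flavour of the kind of code that triggers those inspections, here is a made-up class (not the one in the screenshot):

using System;
using System.Text;              // unused: ReSharper greys this out and offers to remove it

public class Example
{
    private string str1;        // poorly named, and assigned only in the constructor,
                                // so ReSharper also suggests making it readonly

    public Example()
    {
        str1 = DateTime.Now.ToLongDateString();
    }
}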

I strongly encourage you to download this tool, acquaint yourself with it, and start working a little smarter (even on Friday afternoon).

Free WPF Themes


Update 1/10/2010: Reuxables have now released some free themes, including a lightweight version of Gemini (below) that has just two colours. I have a sample application here showing the theme in an application I have just released.

Update : As rightly pointed out by a reader, Reuxables have stopped their free offer as they are now concentrating on the new .NET 4.0 controls and themes, including the elegant Gemini to be released soon.

Gemini

It seems the only freely available themes are the ones on CodePlex. These were originally designed by the Reuxables team, but I have lost faith in them with the withdrawal of the free themes they had available.

I think the best free themes at the moment are the Silverlight 4 themes, but they are yet to be ported to WPF.

Original Post

A little while back I noted that reuxables were gathering steam. Slightly earlier that year, we were all wowed by the Lawson Mango application

lawson_m3_cs_1

as this was a clear illustration that WPF could offer a vector based visual experience that left your ‘jaw on the floor’. Reuxables have now released a free version of the above theme called Inc

inc

I have been using this in my WPF applications, and have grown rather fond of it, principally because WPF themes can be a bit 'too much' for applications that one uses daily. You can get the theme here (there is also a free Metal theme [below] if that takes your fancy).

Metal

Improve the quality of your code with NDepend


Before a parachutist, mountain climber or Formula One driver undertakes their task, a number of checks need to be carried out. These checks assist in deciding, for instance, whether the parachutist will even jump out of the plane; if a typhoon is on the way, everyone is likely to wait until it has passed. The climbing gear the mountain climber chooses must depend on the type of mountain they are to climb. What all these checks result in is fewer surprises, and in computer programming surprises are analogous to bugs.

The Ironbridge Shropshire

I know that architecture is ubiquitously used as a metaphor for engineering software, but the comparison is indeed an irresistible one. Take the Ironbridge in Shropshire for example: every element of this engineering feat must have been stress tested. The materials must have been tested to see how they react to temperature changes, weight and so forth. You can even find out how many rivets were used, and the length of the iron in the bridge, all wonderful and useful information. The Ironbridge (as I'm sure you're well aware) is not the only bridge in the world; there are a great many others that come in different shapes, sizes and ages, and the design process for most of these constructions involves a similar set of circumstances and considerations. A Formula One car, for example, still has four wheels, as does a Mini Cooper. Writing software is no different, and most if not all software projects entail a similar set of circumstances and considerations: whether massive, big or small, you still want to deliver a fast, efficient, reliable and effective application.

I have recently started using NDepend, to quote from their website;

NDepend is a tool that simplifies managing a complex .NET code base. Architects and developers can analyze code structure, specify design rules, plan massive refactoring, do effective code reviews and master evolution by comparing different versions of the code

NDependMain

If you prefer the ribbon, you can change the user interface to suit

Ribbon

The way I look at my code has been changed forever. What you get is an abundance of relevant and pertinent information about your code that you never thought possible. NDepend is about assisting you, the developer (whether a one-man outfit or a team), to ensure that you are producing the highest quality code possible. There is so much information at your fingertips it is almost overwhelming.

The Model View Controller, Model View Presenter and Model View ViewModel design patterns, for example, are all about separating the concerns of your code and increasing the testability of your code base. NDepend complements them: when you perform code reviews it becomes an indispensable tool. Architects can set rules for the code base and easily check that they have been adhered to from a quality point of view, and developers themselves can ensure that they remove any sloppy code they have written, spotting anomalies and problem areas before they check in their code.

Here I have just loaded an assembly from an application that I wrote a while ago. One pane (of many) furnishes me with information about the number of lines of code, comments, methods, fields and types.

ProjectLines

Another pane holds information pertaining to code quality: methods that are too big, methods that are too complex, methods to refactor, methods with too many variables, methods with too many parameters; the list goes on and on. Each is an important code quality concern.

CodeQuality

In the same pane under design, you have suggestions that stateless types in the code could be changed to static types, or that a class without a descendant should be sealed, and again a myriad of other code quality issues that need to be addressed.

Design

Why should it be sealed? The sealed modifier is primarily used to prevent unintended derivation, but it also enables certain run-time optimisations: because a sealed class is known never to have any derived classes, it is possible to transform virtual member invocations on sealed class instances into non-virtual invocations. NDepend also reports five classes here that could be changed to structs, potentially improving performance as they would become value types rather than reference types from a .NET runtime standpoint.
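
As a small illustration of the sealed point (the names are mine, not something NDepend generates):

using System;

public class Shape
{
    public virtual double Area()
    {
        return 0;
    }
}

// No class derives from Circle, so it can safely be sealed. Because no further
// override of Area can ever exist, the runtime is free to turn virtual calls
// made through a Circle reference into direct, non-virtual calls.
public sealed class Circle : Shape
{
    private readonly double _radius;

    public Circle(double radius)
    {
        _radius = radius;
    }

    public override double Area()
    {
        return Math.PI * _radius * _radius;
    }
}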

You also get Visual Studio integration, Reflector integration and a wealth of other features.

VSIntegration

An issue any Software Architect or Developer always runs into is performance: trying to ensure that their code is as optimised as possible, so that end users have the best possible experience when they use an application. I have always used this article to ensure that my applications are structured correctly, and refer to it every now and again (most of the article is relevant to Silverlight and WPF as well). By using NDepend you will end up with better performing applications and more maintainable codebases, because all the 'gremlins' are found out early, and the more you review your code with it, the better your code will be as a result.

NDepend is an essential new development tool, that I will blog about more, once I understand it better.

Unable to find manifest signing certificate in the certificate store


I have recently upgraded operating systems from Windows Vista to Windows 7 Beta 1. When attempting to run a program that compiled on Vista, I get the error message: Unable to find manifest signing certificate in the certificate store. [name of the project]

A brief Google search shows this is a common error, but no one seems to have quite the error I do, as the usual suggestion is to edit the .csproj file and remove the manifest signing section, which I presently do not have.

The cause of this error is ClickOnce. Whenever you use ClickOnce, it creates a temporary strong name key like this one:

tempkey1

It is this key that is missing, hence the compilation error. Luckily, creating another temporary key is easy. Double click the Properties node in the Visual Studio Solution Explorer, select the Signing tab and click on Create Test Certificate…

certificate1
Don’t bother entering a password, click on OK, and you should now find a new temporary key has been created and added, so your project will now compile.