Tuesday, 4 August 2015

How to mock the file system in tests?

You are probably familiar with the System.IO namespace in the .NET Framework. It contains useful APIs for file system operations. Unfortunately, most of its methods are static, which makes them impossible to mock. In this post I would like to show how we write testable code that works with the file system.

Let's look at the simple class below, which copies a file to a destination directory. If the destination directory does not exist, it creates the directory first and then copies the file into it.

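The code of the class was embedded in the original post and is not shown here. Below is a minimal sketch of what such a class could look like when it uses System.IO directly; the FileCopier name and its members are assumptions for illustration.

using System.IO;

public class FileCopier
{
    // Copies the file into the destination directory,
    // creating the directory if it does not exist yet.
    public void Copy(string sourceFile, string destinationDirectory)
    {
        if (!Directory.Exists(destinationDirectory))
        {
            Directory.CreateDirectory(destinationDirectory);
        }

        var destinationFile = Path.Combine(destinationDirectory, Path.GetFileName(sourceFile));
        File.Copy(sourceFile, destinationFile, true);
    }
}

Because Directory, Path and File are static classes, a unit test has no seam here to replace the real file system.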


In order to make our code testable, we'll use the System.IO.Abstractions library, which can be installed from NuGet:

Install-Package System.IO.Abstractions

It provides the IFileSystem interface, which exposes testable counterparts of the familiar File, Directory and Path APIs.

It allows us to refactor our class by injecting IFileSystem into the constructor. All the classes and methods in System.IO.Abstractions mirror those in System.IO, so the transition is very smooth. Note that IFileSystem is injected via an internal constructor, which we will use in unit tests shortly. The public constructor is used in our real code and passes in the actual implementation of IFileSystem, the FileSystem class.
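The refactored class from the original post is not shown here either, so the following is a rough sketch of how the injection could look; the class and member names mirror the assumed sketch above.

using System.IO.Abstractions;

public class FileCopier
{
    private readonly IFileSystem _fileSystem;

    // Public constructor used by production code: passes the real file system.
    public FileCopier() : this(new FileSystem())
    {
    }

    // Internal constructor used by unit tests: allows injecting a fake IFileSystem.
    internal FileCopier(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public void Copy(string sourceFile, string destinationDirectory)
    {
        if (!_fileSystem.Directory.Exists(destinationDirectory))
        {
            _fileSystem.Directory.CreateDirectory(destinationDirectory);
        }

        var destinationFile = _fileSystem.Path.Combine(destinationDirectory, _fileSystem.Path.GetFileName(sourceFile));
        _fileSystem.File.Copy(sourceFile, destinationFile, true);
    }
}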
The last part is the unit test. There is a complementary library that provides an in-memory implementation of the IFileSystem interface, and it can be installed from NuGet as well:

Install-Package System.IO.Abstractions.TestingHelpers

In addition to implementing the IFileSystem interface, it contains convenient methods for adding files and directories (in memory, of course). Let's see what such a unit test looks like; a sketch follows below. We can even assert that the file was copied and exists in the destination folder. The library also works well with all the existing stream read/write operations.
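The test from the original post is also missing, so here is a hedged sketch of what it might look like with MockFileSystem; NUnit is assumed, and the names match the sketches above.

using System.IO.Abstractions.TestingHelpers;
using NUnit.Framework;

[TestFixture]
public class FileCopierTests
{
    [Test]
    public void Copy_CreatesDestinationDirectoryAndCopiesFile()
    {
        // Arrange: an in-memory file system that contains only the source file.
        var fileSystem = new MockFileSystem();
        fileSystem.AddFile(@"c:\source\report.txt", new MockFileData("some content"));
        var copier = new FileCopier(fileSystem);

        // Act
        copier.Copy(@"c:\source\report.txt", @"c:\destination");

        // Assert: the file was copied and exists in the destination folder.
        Assert.IsTrue(fileSystem.File.Exists(@"c:\destination\report.txt"));
    }
}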
Credits for the library go to Tatham Oddie.

Enjoy coding, Boris Modylevsky

Wednesday, 17 June 2015

How to improve your code quality over time?

Developers strive to write their code as well as they can. Then they refactor it to make it clearer and more maintainable. How can we make sure that it actually improves over time?

First of all we need to measure the code's "quality" in order to know its current level. Of course, quality is not a well-defined term, so we need quantitative metrics to approximate it. Many such metrics exist; one of them is Code Coverage. Code Coverage tells us what percentage of the code is executed by passing tests. High Code Coverage indicates that the majority of the code is tested and works properly (at least as defined by the tests). It does not mean that our software has no bugs, but it gives a high level of confidence in what the software does. On the other hand, low Code Coverage indicates that we don't really know whether the code works or not.
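As a toy illustration (not from the original post), consider the method below. If the test suite only ever calls ApplyDiscount with hasCoupon set to false, the discount branch is never executed and the statement coverage of the method stays well below 100%.

public static class PriceCalculator
{
    // A method with two branches: tests that never pass hasCoupon = true
    // leave the first branch uncovered.
    public static decimal ApplyDiscount(decimal price, bool hasCoupon)
    {
        if (hasCoupon)
        {
            return price * 0.9m;  // uncovered unless a test passes hasCoupon = true
        }

        return price;             // covered by a test calling ApplyDiscount(100m, false)
    }
}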

Let's see how we can measure Code Coverage using TeamCity's built-in dotCover. On the Build Steps page, click Edit on your test step, whether it is NUnit or MSTest:

In the .NET Coverage section, select "JetBrains dotCover" as the .NET Coverage tool. dotCover is a coverage tool built into TeamCity. In the Filters section, specify the assemblies that you don't want to cover, for example your tests assembly.

Run the build and navigate to the Code Coverage tab of the build:

It shows the total Code Coverage per class, method and statement, with the option to drill down. The most important number is the statement coverage percentage. In the project below it is 34.8%, which means that approximately a third of the code is covered by tests.

Now, given the current level of our Code Coverage, we would like to make sure it only increases. We are going to define that if code coverage decreases, the build should fail; in other words, a drop in code coverage becomes one of our failure conditions. To do that, navigate to the Build Conditions page and click "Add failure condition" under the "Additional Failure Conditions" section. In the dialog that appears, define the condition as follows and click Save.

Then you can define more failure conditions on other metrics:

Having defined those settings, we can be sure that our code coverage will not decrease over time. But is that enough? Of course not; our team will not produce better code just because we set these automatic failure conditions. It needs to be trained how to identify code smells, how to write clean code and how to write proper unit tests. That is a long process and requires good coaching.

Let's summarize what we have covered so far on how to improve your code quality:
1. Define the metrics that are important to you and measure their current state.
2. Make the measurement part of your continuous integration and define failure conditions that fail the build when the metrics get worse.
3. Train your team to write better code and good unit tests.

I'd like to read your feedback on how it is done in your company. 

P.S. The screenshots above are taken from an OSS project I contribute to called BizArk. Its continuous integration is generously hosted on CodeBetter's TeamCity server. TeamCity is provided for free for open source projects by JetBrains.

Tuesday, 17 February 2015

How to investigate memory leak in .NET applications

In the .NET ecosystem we are used to memory being managed for us: it is allocated, and freed when it is no longer in use, so we do not have to worry about it. But even in .NET we need to think about things like memory and external resources.

I received a complaint from one of our customers that the memory usage of IIS was growing to extreme levels. In addition, the following exception was thrown:

Insufficient winsock resources available to complete socket connection initiation. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full 127.0.0.1:8028

We started by attaching the dotMemory profiler to the IIS process. Here is a screenshot of the process's memory:
We can see from the above picture that the amount of memory used grows over time. It looks like memory is not being released.

In order to understand what kind of objects were not being released, we took two snapshots during the profiling: one at the beginning of the profiling session and another at its end. Then we compared those snapshots. In the comparison we grouped the objects by namespace, then right-clicked and chose "Open Survived Objects". Survived objects are those that weren't released between the snapshots.



In the list of survived objects, we grouped them by Dominators and got the following picture:


From the above picture we understand that instances of TransparentProxy are not being closed. That looks closely related to the "insufficient winsock resources" error we received.

So now to the easy part: fixing the code :-)
We wrapped every service call in a try/finally block. In the finally block we close the open proxy, which also closes the socket.
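The actual fix is not shown in the post. Below is a hedged sketch of the general pattern, assuming a WCF-style proxy that implements ICommunicationObject; the helper name is made up for illustration.

using System;
using System.ServiceModel;

public static class ServiceCallHelper
{
    // Wraps a service call in try/finally so the proxy (and the socket
    // behind it) is always closed, even if the call throws.
    public static void CallAndClose(ICommunicationObject proxy, Action call)
    {
        try
        {
            call();
        }
        finally
        {
            if (proxy.State == CommunicationState.Faulted)
            {
                proxy.Abort();   // a faulted channel cannot be closed gracefully
            }
            else
            {
                proxy.Close();   // releases the underlying connection
            }
        }
    }
}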

After applying the fix, we ran the dotMemory profiler again to see how the memory behaves. Here is what we got:

The picture looks completely different: instead of a steady increase, we see periodic drops in memory usage. The drops are, of course, the result of garbage collection.

Happy profiling!

UPD: As @volebamor mentioned in his tweet, "it's more obvious to open new objects instead of survived". Thanks for the comment!



Sunday, 8 February 2015

Slides for my talk "Continuous Integration in Action"

These are the slides from my recent talk at the Clean Code Alliance meetup group, titled "Continuous Integration in Action". During the talk I focused on the basic principles of continuous integration and shared some tips from my own experience for implementing it successfully and effectively.

Here are the slides:


Continuous Integration in Action from Boris Modylevsky

I would like to thank Itzik Saban for reviewing the slides and helping me rehearse the session. It would not have happened without you.



Tuesday, 7 October 2014

How to improve your ReSharper proficiency

Many developers who start working with ReSharper realize that the learning curve is steep. There are many hidden features, many shortcuts to memorize and many tricks one can use to improve productivity. So here is what I did to help my colleagues learn ReSharper.

I created a REST API that returns a random ReSharper tip or trick. Its source code is available on GitHub:

https://github.com/borismod/ReSharperTnT

I used AppHarbor for continuous integration and cloud hosting, so in the end the REST API is available at the following URL:

http://resharpertnt.apphb.com/api/tipsandtricks/

It returns a random ReSharper tip or trick in JSON format:

Then I incorporated the above REST API into our command-line build process:
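The screenshot of that build step is not available here. As a hedged illustration only (the URL is the real one above, everything else is an assumption), a tiny console helper invoked from the build script could look like this:

using System;
using System.Net.Http;

// Hypothetical helper, e.g. compiled to PrintTip.exe and called from the build script.
public static class PrintTip
{
    public static void Main()
    {
        using (var client = new HttpClient())
        {
            // Fetch a random tip and print the raw JSON so the team
            // sees a new ReSharper trick on every build.
            var json = client
                .GetStringAsync("http://resharpertnt.apphb.com/api/tipsandtricks/")
                .GetAwaiter()
                .GetResult();
            Console.WriteLine(json);
        }
    }
}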

So every time we build, we see a random ReSharper tip or trick.

Enjoy coding!