Thursday 3 December 2015

How to access TeamCity REST API from C#?

If you work with JetBrains TeamCity as your Continuous Integration server, you probably know that it has a rich REST API that allows querying builds, their configurations, and the build queue, as well as performing CRUD operations on them. It can be extremely useful for building custom monitors, triggering builds, cleaning the build queue, and many other activities.

In this post I would like to present FluentTc: an easy-to-use library for all the above operations. When I started working on this library, I had in mind that it should have a fluent, easy-to-discover API. That's what I like in the libraries I consume, so I developed a library the way I like it, and I hope that you will also find it usable and easy to use.

To get started, install the latest package via Manage NuGet Packages -> Search FluentTc,

or by typing the following in your Package Manager Console:

Install-Package FluentTc

So now, having the reference, let's get to the code.

In order to get all the builds of a specific Build Configuration, you can simply use:
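A sketch of what this looks like. The host name, credentials and build configuration Id are placeholders, and the exact builder method names may differ slightly between FluentTc versions:

```csharp
// Connect to the TeamCity server (host and credentials are placeholders)
IConnectedTc connectedTc = new RemoteTc()
    .Connect(h => h.ToHost("teamcity.yourcompany.com").AsUser("userName", "password"));

// Retrieve all builds of a specific build configuration by its Id
var builds = connectedTc.GetBuilds(
    h => h.BuildConfiguration(c => c.Id("MyProject_Ci")));
```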


If you are familiar with the TeamCity REST API, you might know that when retrieving a list of entities, it returns only some basic properties of those entities: Id, Name and Href. It also provides an option to retrieve additional properties. This is how it can be done using FluentTc:
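Along these lines, with a second lambda selecting the extra fields to include (the include-builder method names here are illustrative):

```csharp
// Ask FluentTc to include extra fields on each returned build,
// beyond the default Id, Name and Href
var builds = connectedTc.GetBuilds(
    h => h.BuildConfiguration(c => c.Id("MyProject_Ci")),
    i => i.IncludeStartDate()
          .IncludeFinishDate()
          .IncludeStatusText());
```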



The above methods return the default number of builds as retrieved from the TeamCity REST API. If there is a large number of builds, you'd probably prefer retrieving them with paging, i.e. a few builds at a time. This is how it can be achieved with FluentTc:
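Something like the following, where a count builder specifies the page window (method names are a sketch of the fluent API, not guaranteed verbatim):

```csharp
// Retrieve builds page by page: skip the first 100 builds and take the next 50
var builds = connectedTc.GetBuilds(
    h => h.BuildConfiguration(c => c.Id("MyProject_Ci")),
    count => count.Start(100).Count(50));
```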



Of course, you can apply different query filters when retrieving your builds. For example, in order to retrieve non-personal, pinned, successful builds that ran during the last day under a specific build configuration, use this:
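The filters chain onto the same "having" builder, roughly like this (again, treat the individual filter method names as illustrative):

```csharp
// Non-personal, pinned, successful builds from the last day
// under a specific build configuration
var builds = connectedTc.GetBuilds(
    h => h.BuildConfiguration(c => c.Id("MyProject_Ci"))
          .NotPersonal()
          .Pinned()
          .Status(BuildStatus.Success)
          .SinceDate(DateTime.Now.AddDays(-1)));
```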


Credits:
FluentTc was inspired by Paul Stack's TeamCitySharp library.

For questions/suggestions feel free to comment below.

Sunday 15 November 2015

How to add some fun to software development?

A few days ago I gave a talk titled "10 ways to add fun to development process" at the ALM User Group in Microsoft Raanana. I would like to thank Elad Avneri, the group organizer, for inviting me to the group. During the talk I mentioned some tools and extensions and in this post I will summarize them.

For those of you who missed the session, here is a brief overview:

The development process is tedious and complex: we try to solve problems in different domains within limited time frames. So a little bit of fun and a sense of humor can relax the atmosphere in the team and make the work more productive.

Here are some tools that may help you add some fun to your development process:





Tuesday 4 August 2015

How to mock file system in tests?

You are probably familiar with the System.IO namespace in the .NET Framework. It contains a useful API for file system operations. Unfortunately, most of its methods are static, and thus it is impossible to mock them. In this post I would like to show how to write testable code that works with the file system.

Let's look at the simple class below, which copies a file to a destination directory. If the destination directory does not exist, it creates the directory and then copies the file to the destination.
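A minimal version of such a class, written directly against the static System.IO API (the class and method names here are my own illustration):

```csharp
using System.IO;

public class FileCopier
{
    public void CopyToDirectory(string sourceFile, string destinationDirectory)
    {
        // Create the destination directory if it does not exist yet
        if (!Directory.Exists(destinationDirectory))
        {
            Directory.CreateDirectory(destinationDirectory);
        }

        // Copy the file into the destination directory, keeping its name
        string destination = Path.Combine(destinationDirectory, Path.GetFileName(sourceFile));
        File.Copy(sourceFile, destination, overwrite: true);
    }
}
```

Because Directory and File are static classes, there is no way to substitute them in a unit test, which is exactly the problem we are about to solve.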



In order to make our code testable, we'll use the System.IO.Abstractions library, which can be installed from NuGet:

Install-Package System.IO.Abstractions

It provides an IFileSystem interface with all the useful operations on it.

This allows us to refactor our class by injecting IFileSystem into the constructor. All the classes and methods on System.IO.Abstractions are similar to those on System.IO, so the transition is very smooth. Notice that IFileSystem is injected via an internal constructor, which we'll use in unit tests shortly. The public constructor is used in production code, and it injects the actual implementation of IFileSystem, which is the FileSystem class.

The last part is the unit test. There is a complementary library that contains an in-memory implementation of the IFileSystem interface. It can be installed from NuGet as well:
Install-Package System.IO.Abstractions.TestingHelpers
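Before looking at the test, here is a sketch of how the refactored class might look, with IFileSystem injected (FileCopier is my illustrative name):

```csharp
using System.IO.Abstractions;

public class FileCopier
{
    private readonly IFileSystem _fileSystem;

    // Public constructor used in production code: injects the real file system
    public FileCopier() : this(new FileSystem()) { }

    // Internal constructor used in unit tests: allows injecting a fake file system
    internal FileCopier(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public void CopyToDirectory(string sourceFile, string destinationDirectory)
    {
        if (!_fileSystem.Directory.Exists(destinationDirectory))
        {
            _fileSystem.Directory.CreateDirectory(destinationDirectory);
        }

        string destination = _fileSystem.Path.Combine(
            destinationDirectory, _fileSystem.Path.GetFileName(sourceFile));
        _fileSystem.File.Copy(sourceFile, destination, true);
    }
}
```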

In addition to the implementation of the IFileSystem interface, it contains convenient methods for adding files and directories (in memory, of course). So let's see what our unit test looks like. We can even assert that the file was copied and exists in the destination folder. The library also works well with all the existing stream Read/Write operations.
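A sketch of such a test, assuming a FileCopier class with an internal constructor that accepts IFileSystem (names are illustrative), using NUnit and MockFileSystem from the TestingHelpers package:

```csharp
using NUnit.Framework;
using System.IO.Abstractions.TestingHelpers;

[TestFixture]
public class FileCopierTests
{
    [Test]
    public void CopyToDirectory_DestinationDoesNotExist_FileCopiedToDestination()
    {
        // Arrange: an in-memory file system containing a single source file
        var fileSystem = new MockFileSystem();
        fileSystem.AddFile(@"c:\source\file.txt", new MockFileData("some content"));
        var copier = new FileCopier(fileSystem);

        // Act
        copier.CopyToDirectory(@"c:\source\file.txt", @"c:\destination");

        // Assert: the file exists in the destination folder
        Assert.IsTrue(fileSystem.FileExists(@"c:\destination\file.txt"));
    }
}
```

No real disk I/O happens here, so the test is fast and leaves no files behind.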
Credits for the library go to: Tatham Oddie

Enjoy coding, Boris Modylevsky

Wednesday 17 June 2015

How to improve your code quality over time?

Developers strive to write their code as well as they can. Then they refactor it in order to make it clearer and more maintainable. How can we make sure that it improves over time?

First of all, we need to measure its "quality" in order to know its current level. Of course, quality is not a well-defined term; we need quantitative metrics to measure it. Many metrics can be defined, and one of them is Code Coverage. Code Coverage tells us what percentage of the code is covered by tests, i.e. executed by passing tests. High Code Coverage indicates that the majority of the code is tested and works properly (at least as defined by the tests). It does not mean that our software has no bugs, but it gives a high level of confidence in what the software does. On the other hand, low Code Coverage indicates that we don't know anything about the code, whether it works or not.

Let's see how we can measure Code Coverage using TeamCity's built-in dotCover. On the Build Steps page, click Edit on your test step, whether it is NUnit or MSTest:

In the .NET Coverage section, select "JetBrains dotCover" as the .NET Coverage tool. dotCover is a built-in coverage tool within TeamCity. In the Filters section, specify assemblies that you don't want to cover, for example your tests assembly.


Run the build and navigate to the Code Coverage tab of the build:


It shows the total Code Coverage per class, method and statement, with the option to drill down. The most important number is the percentage of Statement Code Coverage. In the project below it is 34.8%, which means that approximately a third of the code is covered by tests.


Now, given the current level of our Code Coverage, we would like to make sure it only increases. We are going to define that if code coverage decreases, the build should fail; in other words, code coverage becomes one of our failure conditions. In order to do that, navigate to the Build Conditions page and click "Add failure condition" under the "Additional Failure Conditions" section. In the dialog that appears, define the condition as follows and click Save.


Then you can define more failure conditions on other metrics:


Having defined these settings, we can ensure that our code coverage will increase over time. But is it enough? Of course not: our team will not produce better code just because we set these automatic failure conditions. It needs to be trained how to identify code smells, how to write clean code, and how to write proper unit tests. It is a long process and requires good coaching.

Let's summarize what we have covered on how to improve your code quality:
1. Define metrics that are important to you and measure their current state.
2. Make the measurement part of your continuous integration and define failure conditions on its decrease.
3. Train your team to write better code and good unit tests.

I'd like to read your feedback on how it is done in your company. 

P.S. The screenshots above are taken from an OSS project I am contributing to called BizArk. Its continuous integration is generously hosted on CodeBetter's TeamCity server. TeamCity is provided for free for open source projects by JetBrains.





Tuesday 17 February 2015

How to investigate memory leak in .NET applications

In the .NET ecosystem we are used to memory being managed for us: it is allocated, and freed up when no longer in use. We do not have to worry about it. But even in .NET we need to think about things like memory and external resources.

I received a complaint from one of our customers that the memory usage of IIS was growing to extremes. In addition, the following exception was thrown:

Insufficient winsock resources available to complete socket connection initiation. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full 127.0.0.1:8028

We started by attaching the dotMemory profiler to the IIS process. Here is a screenshot of the process's memory:
We can see from the picture above that the amount of memory used grows over time. It looks like memory is not being released.

In order to understand what kind of objects are not being released, we took two snapshots during the profiling session: one at the beginning and another at the end. Then we compared those snapshots. In the comparison we grouped the objects by namespace, then right-clicked and selected "Open Survived Objects". Survived objects are those that weren't released between the snapshots.



In the list of survived objects, we grouped them by Dominators and got the following picture:


From the above picture we understand that instances of TransparentProxy are not closed. This looks closely related to the "insufficient winsock resources" error we received.

So now to the easy part: fixing the code :-)
We wrapped every service call in a try/finally block. In the finally block we close the open proxy, which also closes the socket.
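The pattern, sketched here for a WCF-style channel (the original code used remoting proxies; the service and factory names below are illustrative, not from our actual code base):

```csharp
// Wrap each service call so the proxy, and the socket behind it,
// is always released, even when the call throws.
IMyService proxy = CreateProxy(); // illustrative factory method
try
{
    proxy.DoWork();
}
finally
{
    var channel = proxy as ICommunicationObject;
    if (channel != null)
    {
        if (channel.State == CommunicationState.Faulted)
            channel.Abort();   // a faulted channel cannot be closed gracefully
        else
            channel.Close();   // closes the channel and releases the socket
    }
}
```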

After applying the fix we ran the dotMemory profiler again to see how the memory behaves. Here is what we got:

The picture looks completely different: instead of a steady increase, we see drops in memory usage. The drops are, of course, the result of Garbage Collection.

Happy profiling!

UPD: As @volebamor mentioned in his tweet: "it's more obvious to open new objects instead of survived". Thanks for your comment!



Sunday 8 February 2015

Slides for my talk "Continuous Integration in Action"

Slides for my recent talk at the Clean Code Alliance meetup group, titled "Continuous Integration in Action". During my talk I focused on the basic principles of continuous integration and shared, from my own experience, some tips for its successful and effective implementation.

Here are the slides:


Continuous Integration in Action from Boris Modylevsky

I would like to thank Itzik Saban for reviewing the slides and helping me rehearse the session. It would not have happened without you.