Thursday, August 20, 2015

NuGet Like a Boss: Part 1 - Don't Check in Packages

After suffering, like many developers, through numerous Package Restore woes in my projects, I decided to make notes on the best way to deal with NuGet packages.

Ignore the packages folder

Not doing this means you check in the packages, which are huge. This is annoying and rather defeats the purpose of NuGet. When you ignore (and therefore don't check in) your packages folder, anyone getting your source code can run Package Restore on the solution and NuGet will download the packages automatically.

How?

First, add a file named .tfignore. This may require some renaming from the command prompt, as some setups don't allow file names beginning with a dot. When you get past this annoyance, open the file in Notepad and enter the following:

\packages

That tells TFS to ignore the packages folder. For some bizarre reason, the repositories.config file still needs to be checked in, so you'll need to exclude it from the ignore rule with a second line as follows:

!\packages\repositories.config
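
Putting the two lines together, the complete .tfignore file looks like this:

\packages
!\packages\repositories.config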

You'd think this would be it, but you may notice that your packages folder is already in your Pending changes. To get around this, create a folder called .nuget (command prompt trickery may be required) and in there create a file called NuGet.config. It must go in this folder, even if you have another NuGet.config at solution level. Enter the following text:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <solution>
    <add key="disableSourceControlIntegration" value="true" />
  </solution>
</configuration>

This should ensure that your packages stay out of source control.

Finally, ensure that .tfignore and NuGet.config are added to source control so that these settings apply for anyone using the project.

Gotcha!

Be aware that the .tfignore file may not work if someone has already checked in the packages folder. Delete the folder from source control and you should be good.

Monday, August 10, 2015

Developing for the Cloud - An Introduction


When you create a web application for the cloud, there are many things that need to be done differently. It's not just a case of saying "I'm doing cloud" when all you're really doing is putting your application on someone else's VM. Done that way, the costs are much higher than if the application is designed with the cloud in mind. That might be fine from an infrastructure point of view, but the cloud can have a profound impact on development from the ground up.

Azure allows us, and also forces us, to engineer our applications completely differently.

Forces? Well, this is because when you're hosting in an App Service plan, you're billed based on compute time and resource usage. This forces you to write more efficient code. You can't just write code that performs unnecessary operations and get away with it - the efficiency of code now translates directly to dollars. This demands that you think twice when writing code, especially loops, to ensure you're being efficient. Caching becomes a major priority so that you're not hitting a database unless you absolutely have to.
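
As a minimal sketch of the kind of caching that pays off here (the ProductService class and GetProductsFromDatabase call are hypothetical stand-ins for your own code), an in-memory cache via System.Runtime.Caching means repeat requests never touch the database:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class Product
{
    public string Name { get; set; }
}

public class ProductService
{
    // Cache the product list for 10 minutes so that repeat requests
    // don't cost a database round trip (and the compute time that goes with it).
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public IList<Product> GetProducts()
    {
        var products = Cache.Get("products") as IList<Product>;
        if (products == null)
        {
            products = GetProductsFromDatabase(); // the expensive call
            Cache.Set("products", products, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
            });
        }
        return products;
    }

    private IList<Product> GetProductsFromDatabase()
    {
        // Placeholder for the real data access code.
        return new List<Product>();
    }
}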

With the weight of costs, you need to start thinking about more efficient ways of doing everything. Luckily Azure offers many services to help increase application efficiency. The most basic example is storing images and other static resources in Blob storage. This is fast, lightweight, and extremely cheap.
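
As a rough sketch (using the WindowsAzure.Storage client library; the container name and connection string are assumptions for illustration), uploading an image to Blob storage looks something like this:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class ImageStore
{
    // In a real application this would come from configuration, not a constant.
    private const string ConnectionString = "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...";

    public void UploadImage(string localPath, string blobName)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(ConnectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();

        // Container for static resources; created on first use.
        CloudBlobContainer container = client.GetContainerReference("images");
        container.CreateIfNotExists();

        // Upload the file; the blob can then be served directly (or via a CDN).
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        using (FileStream stream = File.OpenRead(localPath))
        {
            blob.UploadFromStream(stream);
        }
    }
}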

Scripts and CSS can be offloaded to localised CDNs. This also increases speed and significantly decreases cost.

Some data can shift from SQL to NoSQL models such as Table Storage, DocumentDB, or Graph data with its powerful relational features. An application might use all of these data types simultaneously for different purposes. For example, a shopping cart site might use SQL to store a user's identity information, Table storage to manage their shopping cart, DocumentDB to store product details, and Graph data to build "You might also like" links.

Need to run asynchronous/background tasks such as generating thumbnail images? Use WebJobs to run scripts, executables, and third-party libraries completely outside of the typical web application request/response lifetime. Use queues to decouple logic from the main workflow, or use multiple worker threads to perform parallel processing.
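
For example, a minimal sketch of a WebJobs function (using the Microsoft.Azure.WebJobs SDK; the queue and container names are assumptions) that produces a thumbnail whenever a message lands on a queue might look like this:

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Runs completely outside the web app's request/response lifetime. The SDK
    // invokes this method whenever a message appears on the "thumbnail-requests" queue.
    public static void GenerateThumbnail(
        [QueueTrigger("thumbnail-requests")] string imageName,
        [Blob("images/{queueTrigger}", FileAccess.Read)] Stream input,
        [Blob("thumbnails/{queueTrigger}", FileAccess.Write)] Stream output,
        TextWriter log)
    {
        log.WriteLine("Generating thumbnail for {0}", imageName);

        // Placeholder for the actual image resizing logic.
        input.CopyTo(output);
    }
}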

We can also take advantage of Azure Search (powerful search services), Service Bus (host web services locally but expose them via the cloud), Azure Machine Learning (predictive analytics, data mining, and AI), and Notification Hubs (push messages down to clients).

Then, we can host our applications in App Service plans (instead of Virtual Machines) and take advantage of Visual Studio Online's build and Continuous Integration features. We can also leverage elastic scaling and Application Insights.

There are many more features in Azure and more appearing every day.

To summarise, developing for the cloud is very different from building a website that will be isolated on a local server. It's a far more distributed model where separation of concerns is much more pronounced than we're used to. Developers and architects will need to think differently when designing applications.

This is just an intro to cloud features. In the future I will go into cloud architectural design patterns which allow us to design our applications to be resilient in failure-heavy environments.

Tuesday, June 23, 2015

How to Be an Organised Developer (and spend more time coding!)

As a developer your main focus is to write code. But over time, you'll find that there is a lot more to development than this. If you're not aware of this, you might one day wake up and realise that the notepad file you used for passwords and connection strings has gotten out of hand.

Being more organised from the start can keep you focussed on coding and help you stay efficient. If you move jobs, you'll pick up a lot of information in the first few weeks, so you'll want to organise it well from the start. If you stay in the same job for years, the 'other stuff' you accumulate can get messy and cumbersome.

Like good code, making an effort to organise yourself well from the start can pay dividends later on, when it comes to navigating and maintaining all your stuff. Here's a list of some of the "stuff" you'll find yourself accumulating as a developer, and how to keep it well organised.

OneNote (or Equivalent)

An essential tool for all developers.

The main thing this is useful for is storing essential information such as test data, database names, licence numbers for tools such as Resharper and Linqpad, and lists.

You can also use it for debugging information, screen grabs, functional specs, checklists and pretty much anything you can think of. OneNote allows you to organise all this quite effectively with its use of tabs and pages.

Note that by "equivalent", I don't mean Notepad++. You need your notes in one well organised and secure location (save your notebooks remotely), you don't want to have to worry about hitting "Save", and you also need to be able to paste in screen grabs, tables and other rich content. Evernote might come close but OneNote is trusted in enterprise environments which can give it the edge.

Logging Access

Pretty soon you're going to need to access logs for auditing, testing or debugging purposes. Remember to record all details in your note software.

Product Documentation/Wikis

As a developer you will probably be responsible for writing a lot of documentation. Make sure it's in an easily discoverable place and well maintained. Good code should be self-documenting, yes, but other stakeholders need to know what the code is doing from a non-technical perspective.

Bookmarks

You'll always have a selection of really important links. This might include:

  • Product documentation/wikis
  • Test Harnesses
  • ALM tools (TFS, Git)
  • Communication tools (Sharepoint, Trello)
  • Administration tools (Timesheets, financial, personnel software)
  • Development learning materials, tutorials, blog posts, communities etc.

Figure out the best way to organise these based on your needs and make sure they're backed up and accessible everywhere (use Chrome/Firefox Sync or a bookmarks manager).

SQL Files

Most developers will have a collection of SQL files of common, useful queries for logging, basic CRUD, etc. Make sure these are in an easy-to-access, secure, backed-up, and preferably remote location.

Macros

Specifically in Web Development, tools such as iMacros can be invaluable for automating frequent tasks such as logging in to test sites or running common actions on test harnesses. Remember, keep them well organised and backed up remotely using Dropbox/OneDrive etc.

Powershell Scripts and batch files

You might also have some PC management tasks that need to be automated. PowerShell is fantastic for these kinds of tasks, and is becoming even more useful with the advent of DSC.

Linqpad Scripts

Similarly, Linqpad allows for frequently used code to be stored and used in a lightweight manner without all the project overhead.

Code Toolbox

Finally, all developers should have a code toolbox: a collection of libraries and code snippets they use regularly in their projects. This is a lot to go into, so I will create a new post on it, but basically it could consist of:
  • Project templates 
  • Emailing library 
  • Cloud storage library 
  • Logging classes 
  • MVC Html helpers etc
The idea is to allow for Rapid Development by having everything you frequently need at your fingertips and not having to worry about finding code for frequently performed tasks such as input forms or membership features. As I said, I'll give this topic a blog of its own soon.

Conclusion

Hopefully this has inspired you to ditch that pile of text files and get your administrative stuff in order. Doing so will make you more efficient and help you focus on what's really important - writing code.

How do you organise yourself? Is there anything I've missed? Leave a comment!

Friday, May 29, 2015

Why Bother With Unit Tests?

I still see a lot of scepticism about WHY we should write unit tests and whether they're really worth it.

Of course, they usually add time to the development cycle. They can be hard to create. Quite often, they seem pointless: why do I need to check that my TwoTimesTwo() method returns 4? Of course it's going to!

Well, the benefits of Unit Tests may not always be obvious. Often they're disconnected from the problems caused by not doing them. For example, a manager is not likely to blame a Production issue on the fact that you didn't do Unit tests.

Most of the time, if you release some code that performs what it needs to, nobody cares about the quality. However, the people who will care are those who have to maintain your code later, especially if that ends up being:

  1. You; or 
  2. A violent psychopath who knows where you live

Unit Tests can also help to document code, showing you what it should and shouldn't be doing. They can help you understand it better.

But maintainability still isn't enough for many people.

Personally I don’t think anyone can really understand the benefits of Unit Testing until they've written some, and felt the satisfaction of those green ticks, and the confidence they give you. Your code is now rock solid, nobody is going to break it without knowing about it (including yourself).

With unit-tested code you can be sure that every component is doing what it should be. Acceptance criteria are another step, and I'm a strong proponent of automated, behaviour-driven integration tests too. But unit tests can be more granular, covering a vast array of tiny details that are otherwise easily overlooked and become the source of fiddly bugs later on. Also, when bugs are found, the result is usually messy, band-aid fixes which make the code less flexible. TDD makes the code more maintainable to start with: the need to make the code testable pushes you towards SOLID, highly modular, interface-driven design.

In the end, the extra time spent building unit tests is returned several times by spending less time fixing bugs and maintaining messy code.

The best way to see the benefits of Unit Tests is to write some. Start easy by writing one specifically to test a bug fix. You can then be sure that bug will never surface again.
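
For example, here is a minimal sketch using MSTest (the BasketCalculator class is a hypothetical stand-in for your own code). Suppose a bug was reported where an empty basket threw an exception instead of returning a total of zero; a test that pins down the fix means the bug can never quietly return:

using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, with the bug fixed: an empty or null
// list of prices now totals to zero instead of throwing.
public class BasketCalculator
{
    public decimal CalculateTotal(IEnumerable<decimal> itemPrices)
    {
        return itemPrices == null ? 0m : itemPrices.Sum();
    }
}

[TestClass]
public class BasketCalculatorTests
{
    // Regression tests: if anyone reintroduces the bug, these fail immediately.
    [TestMethod]
    public void CalculateTotal_EmptyBasket_ReturnsZero()
    {
        var calculator = new BasketCalculator();
        Assert.AreEqual(0m, calculator.CalculateTotal(new List<decimal>()));
    }

    [TestMethod]
    public void CalculateTotal_NullBasket_ReturnsZero()
    {
        var calculator = new BasketCalculator();
        Assert.AreEqual(0m, calculator.CalculateTotal(null));
    }
}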

Getting started is hard, don't get me wrong. It's like lifting weights for the first time, it's going to hurt. But push through the pain and you'll start to see the benefits. It will get easier.

Friday, March 20, 2015

How to Reduce Deployment Risk

Functionality Switches


Feature flags, controllable by configuration or an "admin tool", are an excellent way to deploy new features. When going live, you can turn on a feature only for your Production test user and run some smoke tests before turning the feature on for the wider world.

Any issues found later can then easily be mitigated by "switching off" the broken features for live users, while at the same time, leaving them on for the test users so that some fault investigation can be done on Production.

Providing there are no breaking regression changes, this will help to avoid rollbacks.
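
A minimal sketch of the idea in C# (the setting name and the test-user check are assumptions, not a prescribed implementation): read the flag from configuration, and optionally allow a feature for the Production test user only:

using System;
using System.Configuration;

public static class FeatureSwitches
{
    // Reads a flag such as <add key="Feature.NewCheckout" value="true" /> from
    // appSettings. A value of "testers" enables the feature for test users only.
    public static bool IsEnabled(string featureName, bool isTestUser = false)
    {
        string value = ConfigurationManager.AppSettings["Feature." + featureName];

        bool enabledForEveryone = string.Equals(value, "true", StringComparison.OrdinalIgnoreCase);
        bool enabledForTestersOnly = string.Equals(value, "testers", StringComparison.OrdinalIgnoreCase);

        return enabledForEveryone || (enabledForTestersOnly && isTestUser);
    }
}

// Usage:
//   if (FeatureSwitches.IsEnabled("NewCheckout", currentUser.IsProductionTester))
//       // new code path
//   else
//       // existing behaviour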

Good logging


Every significant action should be monitored and logged. Obviously this includes calls to external services, but you should also strive to implement logging based on items in the Acceptance criteria. Log results of actions so that you can see if they match expectations. This allows for granular diagnosis so you can see exactly what isn't working and where.
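
As a rough sketch of the idea (using System.Diagnostics.Trace for brevity; in practice you would use whatever logging framework your project already has, and the payment gateway call is hypothetical), log the outcome of each significant action, not just the failures:

using System;
using System.Diagnostics;

public class PaymentProcessor
{
    public bool TakePayment(string orderId, decimal amount)
    {
        Trace.TraceInformation("Calling payment gateway for order {0}, amount {1}", orderId, amount);
        try
        {
            bool accepted = CallPaymentGateway(orderId, amount); // external service call

            // Log the result so it can be checked against the acceptance criteria.
            Trace.TraceInformation("Payment gateway returned {0} for order {1}", accepted, orderId);
            return accepted;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Payment gateway call failed for order {0}: {1}", orderId, ex);
            throw;
        }
    }

    private bool CallPaymentGateway(string orderId, decimal amount)
    {
        // Placeholder for the real external call.
        return true;
    }
}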

Reduced Functionality Instead of Errors


A good, defensive technique is to fall back to previous functionality when something fails, rather than giving the user an error. Obviously this depends on the scenario and this won't always be possible.

However, combined with functionality switches, this can allow users to continue to use your application while you identify a fault using your Production test users. This is also greatly dependent on your logging, of course, as the fault will not manifest in the interface.
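
A minimal sketch of the pattern (the method names are hypothetical): try the new code path and, if it fails, log the fault and fall back to the previous behaviour rather than surfacing an error:

using System;
using System.Diagnostics;

public class RecommendationService
{
    // If the new recommendation engine fails, fall back to the old static list
    // so the user never sees an error - the logs tell us something went wrong.
    public string[] GetRecommendations(string userId)
    {
        try
        {
            return GetPersonalisedRecommendations(userId); // new functionality
        }
        catch (Exception ex)
        {
            Trace.TraceError("Personalised recommendations failed for user {0}: {1}", userId, ex);
            return GetDefaultRecommendations(); // previous functionality
        }
    }

    private string[] GetPersonalisedRecommendations(string userId)
    {
        // Placeholder for the new (riskier) code path.
        throw new NotImplementedException();
    }

    private string[] GetDefaultRecommendations()
    {
        // Placeholder for the existing, well-proven behaviour.
        return new[] { "Bestsellers" };
    }
}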

Tuesday, February 24, 2015

A Quick Summary of DevOps

What exactly is DevOps?

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. (http://theagileadmin.com/what-is-devops/)

Like Agile, it's a rather large topic, but the main point of it is that developers work much more closely with Operations to ensure a more resilient release cycle. It's also characterised by Operations using many of the same techniques as developers - for example: source control, automation, and of course the Agile methodology.

Adopted by all the big tech companies, DevOps allows for hyper-efficient deployments with much faster releases and far fewer production issues. In fact, the highest performers use DevOps to do 30x more deployments, 8,000 times faster, with twice the success rate and 12x faster Mean Time To Repair than the worst performers (those who aren't using DevOps).

As a result, Amazon deploy to Production every 11.6 seconds. And they have very few Production issues. How? DevOps.

There's a lot to it, but some of the basic concepts that allow for this level of efficiency include the following.

Smaller Changes, Deployed More Frequently

Some of the biggest disasters in IT history have come from projects that had the longest lead times. Usually this is due to one reason: When they went live, it was the first time the code had ever been in a Production environment. To be good at DevOps and achieve a high level of fast, reliable releases, you need to get good at releases by doing more of them, more often. This involves shifting to a model of smaller changes, and more efficient releases.

Do Painful things More Frequently

The painful process of releasing to Production needs to become easier and more routine. This is in contrast to traditional thinking, which tends to treat releases as big, scary things that everyone has to make a big deal over. Being efficient at DevOps involves streamlining and improving the release process in the same way we have streamlined our development and testing practices over the past few years.

Common Build Mechanisms

Rather than having distinctly different environments and setups for Development, Test, and Production, there needs to be a concerted effort to make all these environments as similar as possible.

Configuration as Code

To make it easier to implement Common Build Mechanisms, our deployment artifacts, such as configuration, build scripts, and automation scripts, should be treated in the same way as code and managed under source control.

Automation

Like regression testing, deployments and environment setups should be as fully automated as possible, to allow for frequent and error-free repetition.

Definition of Done to include "Works in a Production Environment"

To take DevOps seriously, the ability to deploy to Production needs to be built into work processes.

Targeted Deployments

Techniques such as feature flags and deploying changes to small subsets of users can allow an organisation to get started with faster deployments.

High Trust, Peer Review over Approval

To adopt DevOps, we need to free ourselves of the learned behaviour that low trust, approval barriers, and long lead times make our products more stable. The evidence is that they don't. In studies, higher trust and shorter lead times are also indicators of both quality and happiness. While there might be some barriers to overcome up front, these changes make everyone more confident, more productive, and more efficient overall.

More interaction between Development and Operations

The underlying concept of DevOps is more interaction between Development and Operations. While before, we had a wall of procedures and artifacts between the two departments where work was thrown over, the goal now is to "knock that wall over and build a bridge". This means Operations getting more involved in the process earlier on to create common build mechanisms, configurations and automation. And it also means Developers being more involved in the release process.

Conclusion

Our job as developers is to release to Production without breaking anything. To get better at that, we need to do it more. A lot more.

Much the same as with the adoption of automated testing, work needs to be done up front to set up the infrastructure that allows DevOps to be safe and efficient. But like automated testing, these efforts pay huge, ongoing dividends.

DevOps is being used by most of the large tech companies and is allowing them to become extremely competitive by extensively reducing their lead times and quality issues. Those who refuse to adopt DevOps risk being left far, far behind.
There is a lot more to DevOps that I haven't mentioned here. Learn more with this course (if you don't have much time, make sure you at least watch video 2):
http://www.microsoftvirtualacademy.com/training-courses/assessing-and-improving-your-devops-capabilities

Thursday, February 19, 2015

Multitasking with My Work in Visual Studio

In Visual Studio 2012+ there is a button in Team Explorer called "My Work" that you may not use as often as you should. It contains four sections.

In Progress Work


This contains your current "context": what you're currently working on and your pending code changes.

Suspended Work


This is the most interesting feature, I think. It allows you to change the context in which you are working, so you can switch tasks on request.

Imagine you're deep into coding a feature and an urgent bug is raised that demands immediate attention. Your code might not even compile yet but you have to drop it and get a working setup.

Simple. Just hit "Suspend" and your current changes will be suspended. But not just your changes. This doesn't just save the current state of the source code, it saves EVERYTHING: breakpoints, watches, open windows.

When the bug fix is complete and checked in, simply drag your suspended work back into the "In Progress" section.

This very short video explains the feature in more detail.

Available Work Items


A handy query that shows all the work items currently assigned to you. This includes bugs automatically raised when your changes have broken the build. It can be filtered by Iteration and has a handy link for creating new tasks.

Perhaps most usefully, dragging a task into the "In Progress" section changes its status, which is reflected on the team scrum board.

Code Reviews


Code reviews you have requested or received display here, and various filters can be applied. You are doing code reviews using Visual Studio, aren't you?

Conclusion


In short, the My Work window provides a handy at-a-glance look at your current work in progress, as well as offering the extremely useful Suspended work feature which allows you to quickly and easily take on urgent tasks without losing your context.

Friday, February 13, 2015

The breakpoint will not currently be hit. No symbols have been loaded for this document

Here are some ways to fix the notorious "The breakpoint will not currently be hit. No symbols have been loaded for this document" issue in Visual Studio. Many of the steps may not apply to your situation, but I've tried to make this a comprehensive list that you can check whenever you get this issue.
  1. Get latest code from Source Control
  2. Compile.
  3. Run iisreset to reset IIS if you are using IIS.
  4. Attach to the appropriate IIS instance (or all of them).
  5. Add a breakpoint and run your application.
If you miss step 3, your breakpoint will not be hit, even though your assemblies have compiled and the breakpoint is shown as active. You may need to run iisreset in an administrator command prompt to ensure you have the right permissions.

Some more ideas if the above doesn't work:

Check where your dll is being referenced from and ensure that is the code you're trying to debug.

Check you are building in the correct configuration (Debug/Release), as they may put the DLLs in different places.

Are you attached to the correct process?

For a website on IIS, is the code you're working on the same as the code running in IIS? 

Go to Debug > Windows > Modules and, if the relevant DLL is there, right-click it and load the symbols.
If it's not in the list, try running the code anyway. Sometimes, even though it says the breakpoint will not be hit, it's only because the DLL is not loaded until you enter a scenario that needs it. Try the scenario that depends on the DLL, and it may just hit the breakpoint anyway.

Restart your browser. You might have something cached from an older dll.


Do you have any other tricks to get around this issue? Leave a comment!

Tuesday, November 25, 2014

Creating Responsive Tables


When building a responsive website whose main responsibility is to display data in tables, one needs to put some thought into how to present this data on smaller screens. Tables are, of course, as wide as their data columns, so this presents some challenges when the view width is limited.

Using Bootstrap's table-responsive

Bootstrap's table-responsive class offers one solution, where the table is given a horizontal scrollbar. This is not ideal as it requires the user to scroll the table to see all the data in a row. This can be done by touching the table, but that is not obvious and some users may think they need to use the scrollbar. Usability then suffers when there are more than a few rows in the table: the user has to scroll to the bottom of the table before scrolling horizontally, leading to some frustrating vertical scrolling if the data they are looking at is back up at the top of the table.

Rotating the Data

The following is an alternative approach. The idea is to "rotate" the data so that columns become rows and rows become collections of rows. So this table:



Looks like this on a smaller screen:


Achieving this with CSS poses a couple of challenges. How do you lay out table header cells vertically, and flow the data cells in this way? Well obviously you can't, so you need to employ a couple of tricks.

The first thing to do is set up a simple table. It can use the Bootstrap table class as well as our custom responsive-table class (note that this is different from the Bootstrap table-responsive class).

<table class="table responsive-table">
    <thead>
        <tr>
            <th>Number</th>
            <th>First Name</th>
            <th>Last Name</th>
            <th>Address</th>
            <th>Points</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td data-content="Number">1</td>
            <td data-content="First Name">Sam</td>
            <td data-content="Last Name">Smith</td>
            <td data-content="Address">12 Smith Road</td>
            <td data-content="Points">87</td>
        </tr>
        <tr>
            <td data-content="Number">2</td>
            <td data-content="First Name">Bob</td>
            <td data-content="Last Name">Jones</td>
            <td data-content="Address">99 Angle Street</td>
            <td data-content="Points">43</td>
        </tr>
        <tr>
            <td data-content="Number">3</td>
            <td data-content="First Name">Terrence</td>
            <td data-content="Last Name">Rogers</td>
            <td data-content="Address">999 Letsby Avenue</td>
            <td data-content="Points">85</td>
        </tr>
        <tr>
            <td data-content="Number">4</td>
            <td data-content="First Name">Lawrence</td>
            <td data-content="Last Name">Burnfish</td>
            <td data-content="Address">69 The Matrix</td>
            <td data-content="Points">0</td>
        </tr>
    </tbody>
</table>

Of course in a real scenario, this data would be dynamic.

Note the use of the data attribute. We'll get to that shortly. Otherwise this is a basic table. No CSS is required for the full-screen version, unless you want to add some formatting, as this is handled by Bootstrap's table class.

The CSS

It's in the media query where the magic happens. You will need to decide on your breakpoint size depending on the expected minimum width of the table, which might depend on the number of columns and the width of the data. For this example, we'll set the breakpoint to 600 pixels.

@media only screen and (max-width: 600px) {
    .responsive-table {
        border-top: 1px solid #ccc;
    }

    .responsive-table th,
    .responsive-table thead {
        display: none;
    }

    .responsive-table, .responsive-table tbody, .responsive-table tr, .responsive-table td {
        display: block;
    }

    .responsive-table tr {
        border-bottom: 1px solid #ccc;
    }

    .responsive-table td {
        /* important to override bootstrap */
        padding-left: 50% !important;
        border-top: 0 !important;
        text-align: left;
        position: relative;
        border-bottom: 0;
    }

    /* Now like a table header */
    .responsive-table td:before {
        font-weight: bold;
        font-size: 0.85em;
        position: absolute;
        margin-left: -50%;
        width: 50%;
        white-space: nowrap;
        content: attr(data-content);
    }
}

Setting the display property of the tr and td elements to block forces the cells to flow vertically.

.responsive-table, .responsive-table tbody, .responsive-table tr, .responsive-table td {
    display: block;
}

Each cell has left padding of 50%, which matches the width and negative margin-left of our td:before pseudo-element:

margin-left: -50%;       
width: 50%;

To display the table headers we're hiding the actual th elements and using the value of the data-content attribute of each table cell. This is done using the td:before pseudo element and by setting the content css property using attr(data-content).

.responsive-table td:before {
    font-weight: bold;
    font-size: 0.85em;
    position: absolute;
    margin-left: -50%;
    width: 50%;
    white-space: nowrap;
    content: attr(data-content);
}

The beauty of using the data-* attribute is that you can set these in the HTML to something dynamic such as a JavaScript or @razor variable. As they will be the same for each row, this means they can be set within the loop which displays the rows.

Presentation

We're hiding the borders on the table cells and setting a border-bottom on the table rows, so that our data is neatly separated into a collection of rows.  Notice the table itself has a top border, just to frame it.

Browser compatibility

As this trick uses media queries, and media queries only work in IE9+, we only need to ensure the other features work in IE9+ (below this, the table will not reflow when the screen shrinks anyway). Content, data-*, and pseudo-elements all work in IE9, but unfortunately negative margins don't, and combined with position: absolute, which is essential, this causes a pretty significant issue.




Of course this only becomes a problem when an IE9 user (currently about 2% globally) reduces their browser window, and it won't be a problem on the actual mobile devices this technique is targeted at, so it's up to you whether you mitigate it by putting the media query into a conditional statement.

Wrapping text in titles

Watch out for cell titles with more than one word. If you have white-space set to normal in the td:before CSS, these will wrap onto the next line when the screen size is reduced. One way around this is to make your table cell heights 2em or more, although this obviously means all sub-rows will be double height even when there isn't a long title.

Conclusion

Refactoring tables to display vertically can be a very useful alternative technique for presenting wide tables on smaller devices. The technique may require tailoring to your specific data shape and presentation requirements, but overall the tables are easy to read and interact with.


Resources