Thursday, December 17, 2015

Microsoft Certified Solutions Developer!

After 3 years, 3 exams, and an excessive amount of study, I have achieved the Microsoft Certified Solutions Developer - Web Applications certification.

I realise that many people don't put a huge amount of emphasis on exams, valuing real-world experience instead. But I have found that what I learned studying for these exams has been extremely valuable to my profession, helping me make more informed decisions in my day-to-day work.

There is no substitute for real-world experience, but real-world experience leaves a lot of gaps and doesn't always teach the best habits. The exams have enabled me to fill in these gaps and develop a real technical proficiency which I may not have had without them.

Update: 
Born to Learn has just posted a blog discussing a recent study which found four advantages certified staff have over non-certified staff. There are some interesting findings! https://borntolearn.mslearn.net/b/weblog/archive/2016/01/25/four-solid-reasons-to-hire-certified-employees

Tuesday, October 27, 2015

Amazing Things You Can Do With Azure Storage Queues

Load Balancing and Workload Distribution

Queues can be used to organise the distribution of a workload across multiple processors.

A common technique is to post messages to a queue, each containing the key of a task that needs to be performed, and have multiple worker processes pick these tasks off the queue and perform them in parallel.

This can be combined with other storage mechanisms to perform complex tasks. Say you were trying to render an animation. The queue could be loaded with keys, each pointing to an entity in Table storage. That entity could contain the URL of an image stored in blob storage. A worker process would pick up the message from the queue, retrieve the entity from the table, and download the image from blob storage. It could then perform compute-intensive tasks on the image while other worker processes picked up other images.
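
To make that concrete, here's a minimal sketch of the worker side, assuming the classic Microsoft.WindowsAzure.Storage client library (the current Azure SDK for .NET at the time of writing); the queue name and the processing details are hypothetical:

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class FrameWorker
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("render-tasks");
        queue.CreateIfNotExists();

        while (true)
        {
            // Hide the message from other workers for 5 minutes while we process it.
            var message = queue.GetMessage(TimeSpan.FromMinutes(5));
            if (message == null) { Thread.Sleep(1000); continue; }

            var taskKey = message.AsString; // key of an entity in Table storage
            // ...retrieve the entity, download the image from blob storage,
            // and do the compute-intensive rendering here...

            // Delete only after successful processing, so a crashed worker's
            // message becomes visible again for another worker to pick up.
            queue.DeleteMessage(message);
        }
    }
}

Run several instances of this worker and the queue itself becomes the load balancer: each message is handed to exactly one worker at a time.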

In this way, cloud storage can be used to augment worker roles and perform powerful parallel operations.

Asynchronous Processing and Temporal Decoupling

It might sound like something you do with a sonic screwdriver, but temporal decoupling simply means that senders and receivers don't need to be sending and receiving messages at the same time. With messages stored on the queue, they can be processed when the receiver is ready to do so, and the sender is not held up.

This can be useful for performing background tasks which are not critical to the main operation of a program. For example, rather than providing a slow response, a web application can queue up a thumbnail generation process and respond to the user quickly. A game, which requires the utmost UI responsiveness, might store replay data asynchronously using queues.
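
The producer side is tiny. Here's a minimal sketch of a web app queuing a thumbnail job instead of generating the image inline; the queue name and message format are hypothetical:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class ThumbnailJobs
{
    public static void Enqueue(string connectionString, string imageBlobUrl)
    {
        var queue = CloudStorageAccount.Parse(connectionString)
            .CreateCloudQueueClient()
            .GetQueueReference("thumbnail-jobs");
        queue.CreateIfNotExists();

        // Respond to the user immediately; a worker picks this up whenever
        // it's ready - that's the temporal decoupling.
        queue.AddMessage(new CloudQueueMessage(imageBlobUrl));
    }
}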

Thursday, August 20, 2015

NuGet Like a Boss: Part 1 - Don't Check in Packages

After suffering, like many others, through endless Package Restore woes in my projects, I decided to make notes on the best way to deal with NuGet packages.

Ignore the packages folder


Not doing this means you check in your packages, which are huge. This is annoying and kind of defeats the purpose of NuGet. When you ignore (and therefore don't check in) your packages folder, anyone getting your source code can run package restore on the solution and NuGet will download the packages automatically.

How?


First, add a file named .tfignore. This may require some command prompt renaming (for example, ren tfignore.txt .tfignore), as some setups don't like files beginning with a dot. When you get past this annoyance, open the file in Notepad and enter the following:

\packages

That tells TFS to ignore the packages folder. You do, however, still want the repositories.config file checked in, so you'll need a second line to exclude it from the ignore rule:

!\packages\repositories.config

You'd think this would be it, but you may notice that your packages folder is already in your Pending changes. To get around this, create a folder called .nuget (command prompt trickery may be required) and in there create a file called NuGet.config. It must go in this folder, even if you have another NuGet.config at solution level. Enter the following text:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <solution>
    <add key="disableSourceControlIntegration" value="true" />
  </solution>
</configuration>

This should ensure that your packages stay out of source control.

Finally, ensure that .tfignore and NuGet.config are added to source control, so that these settings apply to anyone using the project.

Gotcha!


Be aware that the .tfignore file may not work if someone has already checked in the packages folder. Delete the folder from source control and you should be good.

Monday, August 10, 2015

Developing for the Cloud - An Introduction


When you create a web application for the cloud, there are many things that need to be done differently. It's not just a case of saying "I'm doing cloud" when all you're really doing is putting your application on someone else's VM. Done that way, the costs are much higher than they would be if the application were designed with the cloud in mind. That might be fine from an infrastructure point of view, but the cloud can have profound impacts on development from the ground up.

Azure allows us, and also forces us, to engineer our applications completely differently.

Forces? Well, when you're hosting in an App Service plan, you're billed based on compute time and resource usage, which forces you to write more efficient code. You can't just write code that performs unnecessary operations and get away with it - the efficiency of your code now directly translates to dollars. This demands that you think twice when writing code, especially loops, to ensure you're being efficient. Caching becomes a major priority so that you're not hitting a database unless you absolutely have to.
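
As a minimal sketch of that caching priority, here's an in-memory cache using System.Runtime.Caching from the .NET Framework; the Product type, the expiry time, and the LoadProductFromDatabase call are hypothetical:

using System;
using System.Runtime.Caching;

public class ProductCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public Product GetProduct(string productId)
    {
        var cached = Cache.Get(productId) as Product;
        if (cached != null) return cached; // no database hit, no extra dollars

        var product = LoadProductFromDatabase(productId); // the expensive call
        Cache.Set(productId, product,
            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) });
        return product;
    }

    private Product LoadProductFromDatabase(string productId)
    {
        // ...query SQL here...
        return new Product();
    }
}

public class Product { }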

With the weight of costs, you need to start thinking about more efficient ways of doing everything. Luckily Azure offers many services to help increase application efficiency. The most basic example is storing images and other static resources in Blob storage. This is fast, lightweight, and extremely cheap.
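
For illustration, here's a minimal sketch of pushing a static image into blob storage with the Microsoft.WindowsAzure.Storage library; the container name is hypothetical:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class StaticAssets
{
    public static string Upload(string connectionString, string localPath)
    {
        var container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference("images");
        container.CreateIfNotExists();

        var blob = container.GetBlockBlobReference(Path.GetFileName(localPath));
        using (var stream = File.OpenRead(localPath))
        {
            blob.UploadFromStream(stream);
        }
        // Serve this URL directly from your pages instead of from the web server.
        return blob.Uri.ToString();
    }
}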

Scripts and CSS can be offloaded to localised CDNs. This also increases speed and significantly decreases cost.

Some data can shift from SQL to NoSQL models such as Table Storage, DocumentDB, or Graph data with its powerful relational features. An application might use all of these data types simultaneously for different purposes. For example, a shopping cart site might use SQL to store a user's identity information, Table storage to manage their shopping cart, DocumentDB to store product details, and Graph data to build "You might also like" links.
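
As a minimal sketch of the Table storage piece of that example, a shopping cart line can be modelled as a TableEntity; the entity shape and table name are hypothetical:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class CartItem : TableEntity
{
    public CartItem() { } // required by the storage library for deserialisation

    public CartItem(string userId, string productId)
    {
        PartitionKey = userId; // one user's whole cart lives in one partition
        RowKey = productId;    // unique per product within that cart
    }

    public int Quantity { get; set; }
}

public static class CartStore
{
    public static void Add(string connectionString, CartItem item)
    {
        var table = CloudStorageAccount.Parse(connectionString)
            .CreateCloudTableClient()
            .GetTableReference("carts");
        table.CreateIfNotExists();
        table.Execute(TableOperation.InsertOrReplace(item));
    }
}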

Need to run asynchronous/background tasks such as generating thumbnail images? Use WebJobs to run scripts, executables, and third-party libraries completely outside the typical web application request/response lifetime. Use queues to decouple logic from the main workflow, or use multiple worker threads to perform parallel processing.

We can also take advantage of Azure Search (powerful search services), Service Bus (host web services locally but expose them via the cloud), Azure Machine Learning (predictive analytics, data mining, and AI), and Notification Hubs (push messages down to clients).

Then, we can host our applications in App Service plans (instead of Virtual Machines) and take advantage of Visual Studio Online's build and Continuous Integration features. We can also leverage elastic scaling and Application Insights.

There are many more features in Azure and more appearing every day.

To summarise, developing for the cloud is very different from building a website that will be isolated on a local server. It's a far more distributed model where separation of concerns is much more pronounced than we're used to. Developers and architects will need to think differently when designing applications.

This is just an intro to cloud features. In the future I will go into cloud architectural design patterns which allow us to design our applications to be resilient in failure-heavy environments.

Tuesday, June 23, 2015

How to Be an Organised Developer (and spend more time coding!)

As a developer, your main focus is to write code. But over time, you'll find that there is a lot more to development than this. If you're not aware of this, you might one day wake up and realise that the notepad file you used for passwords and connection strings has gotten out of hand.

Being more organised from the start can help keep you focussed on coding and help you stay more efficient. If you move jobs, you'll pick up a lot of information in the first few weeks, so you'll want to organise it well from the start. If you stay in the same job for years, the 'other stuff' you accumulate can get messy and cumbersome.

Like good code, making an effort to organise yourself well from the start can pay dividends later on, when it comes to navigating and maintaining all your stuff. Here's a list of some of the "stuff" you'll find yourself accumulating as a developer, and how to keep it well organised.

OneNote (or Equivalent)

An essential tool for all developers.

The main thing this is useful for is storing essential information such as test data, database names, licence numbers for tools such as ReSharper and LINQPad, and lists.

You can also use it for debugging information, screen grabs, functional specs, checklists and pretty much anything you can think of. OneNote allows you to organise all this quite effectively with its use of tabs and pages.

Note that by "equivalent", I don't mean Notepad++. You need your notes in one well-organised and secure location (save your notebooks remotely), you don't want to have to worry about hitting "Save", and you also need to be able to paste in screen grabs, tables, and other rich content. Evernote might come close, but OneNote is trusted in enterprise environments, which can give it the edge.

Logging Access

Pretty soon you're going to need to access logs for auditing, testing or debugging purposes. Remember to record all details in your note software.

Product Documentation/Wikis

As a developer you will probably be responsible for writing a lot of documentation. Make sure it's in an easily discoverable place and well maintained. Good code should be self-documenting, yes, but other stakeholders need to know what the code is doing from a non-technical perspective.

Bookmarks

You'll always have a selection of really important links. This might include:

  • Product documentation/wikis
  • Test Harnesses
  • ALM tools (TFS, Git)
  • Communication tools (Sharepoint, Trello)
  • Administration tools (Timesheets, financial, personnel software)
  • Development learning materials, tutorials, blog posts, communities etc.

Figure out the best way to organise these based on your needs and make sure they're backed up and accessible everywhere (use Chrome/Firefox Sync or a bookmarks manager).

SQL Files

Most developers will have a collection of SQL files of common, useful queries - for logging, basic CRUD, etc. Make sure these are in an easy-to-access, secure, backed-up, and preferably remote location.

Macros

Specifically in Web Development, tools such as iMacros can be invaluable for automating frequent tasks such as logging in to test sites or running common actions on test harnesses. Remember, keep them well organised and backed up remotely using Dropbox/OneDrive etc.

PowerShell Scripts and Batch Files

You might also have some PC management tasks that need to be automated. PowerShell is fantastic for these kinds of tasks, and is becoming even more useful with the advent of DSC (Desired State Configuration).

Linqpad Scripts

Similarly, LINQPad allows frequently used code to be stored and run in a lightweight manner, without all the overhead of a project.

Code Toolbox

Finally, all developers should have a code toolbox: a collection of libraries and code snippets they use regularly in their projects. There is a lot to go into, so I will write a separate blog post on it, but basically it could consist of:
  • Project templates 
  • Emailing library 
  • Cloud storage library 
  • Logging classes 
  • MVC HTML helpers etc.

The idea is to allow for rapid development by having everything you frequently need at your fingertips, and not having to worry about finding code for frequently performed tasks such as input forms or membership features. As I said, I'll give this topic a blog post of its own soon.

Conclusion

Hopefully this has inspired you to ditch that pile of text files and get your administrative stuff in order. Doing so will make you more efficient and help you focus on what's really important - writing code.

How do you organise yourself? Is there anything I've missed? Leave a comment!

Friday, May 29, 2015

Why Bother With Unit Tests?

I still see a lot of scepticism about WHY we should write unit tests and whether they're really worth it.

Of course, they usually add time to the development cycle. They can be hard to create. Quite often, they seem pointless: why do I need to check that my TwoTimesTwo() method returns 4? Of course it's going to!

Well, the benefits of unit tests may not always be obvious. Often they're disconnected from the problems caused by not writing them. For example, a manager is not likely to blame a Production issue on the fact that you didn't write unit tests.

Most of the time, if you release some code that performs what it needs to, nobody cares about the quality. However, the people who will care are those who have to maintain your code later, especially if that ends up being:

  1. You; or 
  2. A violent psychopath who knows where you live

Unit Tests can also help to document code, showing you what it should and shouldn't be doing. They can help you understand it better.

But maintainability still isn't enough for many people.

Personally, I don't think anyone can really understand the benefits of unit testing until they've written some tests and felt the satisfaction of those green ticks, and the confidence they give you. Your code is now rock solid; nobody is going to break it without knowing about it (including yourself).

With unit-tested code, you can be sure that every component is doing what it should. Acceptance criteria are another step, and I'm a strong proponent of automated behaviour-driven integration tests too. But unit tests are more granular, covering a vast array of tiny details that can otherwise be easily overlooked and become the source of fiddly bugs later on. Also, when bugs are found, the usual result is messy, band-aid fixes which make the code less flexible. TDD makes the code more maintainable to start with: the need to keep code testable pushes you to write highly modular, interface-driven code, reinforcing the SOLID principles.

In the end, the extra time spent building unit tests is returned several times by spending less time fixing bugs and maintaining messy code.

The best way to see the benefits of Unit Tests is to write some. Start easy by writing one specifically to test a bug fix. You can then be sure that bug will never surface again.
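
For example, here's what such a regression test might look like, using MSTest as one option; the DiscountCalculator, the bug number, and the boundary behaviour are all hypothetical:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public class DiscountCalculator
{
    // The fixed behaviour: 10% off orders of $100 or more.
    public decimal Apply(decimal orderTotal) =>
        orderTotal >= 100m ? orderTotal * 0.9m : orderTotal;
}

[TestClass]
public class DiscountCalculatorTests
{
    // Bug #1234: orders of exactly $100 were incorrectly denied the discount.
    [TestMethod]
    public void Discount_IsApplied_AtExactlyOneHundredDollars()
    {
        var calculator = new DiscountCalculator();

        var discounted = calculator.Apply(100m);

        // If anyone reintroduces the off-by-one boundary check,
        // this test goes red before the bug reaches Production.
        Assert.AreEqual(90m, discounted);
    }
}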

Getting started is hard, don't get me wrong. It's like lifting weights for the first time: it's going to hurt. But push through the pain and you'll start to see the benefits. It will get easier.

Friday, March 20, 2015

How to Reduce Deployment Risk

Functionality Switches

Feature flags, controllable by configuration or an "admin tool", are an excellent way to deploy new features. When going live, you can turn on a feature only for your Production test user and run some smoke tests before turning the feature on for the wider world.

Any issues found later can then easily be mitigated by "switching off" the broken features for live users, while at the same time, leaving them on for the test users so that some fault investigation can be done on Production.

Providing there are no breaking regression changes, this will help to avoid rollbacks.
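
A minimal sketch of such a switch, driven by appSettings with a separate key for Production test users; the key names and the IsTestUser check are hypothetical:

using System.Configuration;

public static class Features
{
    public static bool IsEnabled(string featureName, string userId)
    {
        // Test users see the feature as soon as it's deployed...
        if (IsTestUser(userId) &&
            ConfigurationManager.AppSettings[featureName + ":Test"] == "true")
            return true;

        // ...everyone else only once it's switched on for the wider world.
        return ConfigurationManager.AppSettings[featureName] == "true";
    }

    private static bool IsTestUser(string userId)
    {
        var testUsers = ConfigurationManager.AppSettings["TestUsers"] ?? "";
        return testUsers.Contains(userId);
    }
}

Switching a broken feature off for live users is then just a config change, not a rollback.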

Good Logging

Every significant action should be monitored and logged. Obviously this includes calls to external services, but you should also strive to implement logging based on items in the acceptance criteria. Log the results of actions so that you can see whether they match expectations. This allows for granular diagnosis, so you can see exactly what isn't working and where.
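
As a minimal sketch, here's logging of an external call and its outcome with System.Diagnostics.Trace (in a real project you'd likely use a logging library such as log4net or NLog); the payment gateway scenario is hypothetical:

using System;
using System.Diagnostics;

public class PaymentGateway
{
    public bool Charge(string orderId, decimal amount)
    {
        Trace.TraceInformation("Charging order {0} for {1}", orderId, amount);
        try
        {
            var approved = CallExternalPaymentService(orderId, amount);
            // Log the result, not just the attempt, so it can be checked
            // against the expectations in the acceptance criteria.
            Trace.TraceInformation("Order {0} charge approved: {1}", orderId, approved);
            return approved;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Order {0} charge failed: {1}", orderId, ex);
            throw;
        }
    }

    private bool CallExternalPaymentService(string orderId, decimal amount)
    {
        // ...call the real service here...
        return true;
    }
}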

Reduced Functionality Instead of Errors

A good, defensive technique is to fall back to previous functionality when something fails, rather than giving the user an error. Obviously this depends on the scenario and this won't always be possible.

However, combined with functionality switches, this can allow users to continue to use your application while you identify a fault using your production test users. This is also greatly dependent on your logging, of course, as your fault will not be manifesting in the interface.
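
A minimal sketch of the fallback pattern, reusing the hypothetical Features switch from above; the recommendation engine is also hypothetical:

using System;
using System.Diagnostics;

public class Recommendations { }

public class RecommendationService
{
    public Recommendations GetRecommendations(string userId)
    {
        if (Features.IsEnabled("NewRecommendationEngine", userId))
        {
            try
            {
                return GetRecommendationsV2(userId); // the new, riskier path
            }
            catch (Exception ex)
            {
                // The user never sees the failure, but the logs record it
                // for diagnosis with your Production test users.
                Trace.TraceError("New engine failed for {0}: {1}", userId, ex);
            }
        }
        // Fall back to the previous, known-good functionality.
        return GetRecommendationsV1(userId);
    }

    private Recommendations GetRecommendationsV2(string userId)
    {
        throw new NotImplementedException("new engine goes here");
    }

    private Recommendations GetRecommendationsV1(string userId)
    {
        return new Recommendations(); // old behaviour goes here
    }
}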

Tuesday, February 24, 2015

A Quick Summary of DevOps

What exactly is DevOps?

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. (http://theagileadmin.com/what-is-devops/)

Like Agile, it's a rather large topic, but the main point of it is that developers work much more closely with Operations to ensure a more resilient release cycle. It's also characterised by Operations using many of the same techniques as developers - for example: source control, automation, and of course the Agile methodology.

Adopted by all the big tech companies, DevOps allows for hyper-efficient deployments with much faster releases and far fewer production issues. In fact, the highest performers use DevOps to deploy 30x more frequently and 8,000x faster, with twice the success rate and 12x faster Mean Time To Repair than the worst performers (those who aren't using DevOps).

As a result, Amazon deploy to Production every 11.6 seconds. And they have very few Production issues. How? DevOps.

There's a lot to it, but some of the basic concepts that allow for this level of efficiency include the following.

Smaller Changes, Deployed More Frequently

Some of the biggest disasters in IT history have come from projects with the longest lead times. Usually this is due to one reason: when they went live, it was the first time the code had ever been in a Production environment. To be good at DevOps and achieve a high level of fast, reliable releases, you need to get good at releases by doing more of them, more often. This means shifting to a model of smaller changes and more efficient releases.

Do Painful Things More Frequently

The remedy for a painful release process is to go through it more often, not less. This is in contrast to traditional thinking, which tends to treat releases as big scary things that everyone has to make a big deal over. Being efficient at DevOps means streamlining and improving the release process in the same way we have streamlined our development and testing practices over the past few years.

Common Build Mechanisms

Rather than having distinctly different environments and setups for Development, Test, and Production, there needs to be a concerted effort to make all these environments as similar as possible.

Configuration as Code

To make it easier to implement common build mechanisms, deployment artifacts such as configuration, build scripts, and automation scripts should be treated in the same way as code and managed under source control.

Automation

Like regression testing, deployments and environment setups should be as fully automated as possible, to allow for frequent and error-free repetition.

Definition of Done to include "Works in a Production Environment"

To take DevOps seriously, the ability to deploy to Production needs to be built into work processes.

Targeted Deployments

Techniques such as feature flags and deploying changes to small subsets of users can allow an organisation to get started with faster deployments.
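
A minimal sketch of the "small subsets of users" part: hash the user id against a configurable rollout percentage so each user stays consistently in or out; the config key name is hypothetical:

using System.Configuration;
using System.Linq;

public static class Rollout
{
    // True if this user falls within the configured percentage, e.g.
    // <add key="NewCheckout:RolloutPercent" value="5" /> for 5% of users.
    public static bool IsInRollout(string featureName, string userId)
    {
        int percentage = int.Parse(
            ConfigurationManager.AppSettings[featureName + ":RolloutPercent"] ?? "0");

        // A simple deterministic hash; string.GetHashCode isn't guaranteed
        // stable across runtimes, and users must keep the same bucket.
        int hash = userId.Aggregate(17, (acc, c) => unchecked(acc * 31 + c));
        return (hash & 0x7fffffff) % 100 < percentage;
    }
}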

High Trust, Peer Review over Approval

To adopt DevOps, we need to free ourselves of the learned behaviour that low trust, approval barriers, and long lead times make our products more stable. The evidence is that they don't. In studies, higher trust and shorter lead times are indicators of both quality and happiness. While there may be some barriers to overcome up front, these changes make everyone more confident, more productive, and more efficient overall.

More interaction between Development and Operations

The underlying concept of DevOps is more interaction between Development and Operations. While before, we had a wall of procedures and artifacts between the two departments where work was thrown over, the goal now is to "knock that wall over and build a bridge". This means Operations getting more involved in the process earlier on to create common build mechanisms, configurations and automation. And it also means Developers being more involved in the release process.

Conclusion

Our job as developers is to release to Production without breaking anything. To get better at that, we need to do it more. A lot more.

Much the same as the adoption of automated testing, work needs to be done upfront to set up the infrastructure that allows DevOps to be safe and efficient. But like test automation, these efforts pay huge, ongoing dividends.

DevOps is being used by most of the large tech companies and is allowing them to become extremely competitive by extensively reducing their lead times and quality issues. Those who refuse to adopt DevOps risk being left far, far behind.
There is a lot more to DevOps that I haven't mentioned here. Learn more with this course:
http://www.microsoftvirtualacademy.com/training-courses/assessing-and-improving-your-devops-capabilities
If you don't have much time, make sure you at least watch video 2.

Thursday, February 19, 2015

Multitasking with My Work in Visual Studio

In Visual Studio 2012 and later, there is a button in Team Explorer that you may not use as often as you should: "My Work". It contains four sections.

In Progress Work

This contains your current "context": what you're currently working on and your pending code changes.

Suspended Work

This is the most interesting feature, I think. It allows you to switch the context in which you are working, so you can change tasks on request.

Imagine you're deep into coding a feature and an urgent bug is raised that demands immediate attention. Your code might not even compile yet but you have to drop it and get a working setup.

Simple. Just hit "Suspend" and your current changes will be suspended. But not just your changes. This doesn't just save the current state of the source code; it saves EVERYTHING: breakpoints, watches, open windows.

When the bug fix is complete and checked in, simply drag your suspended work back into the "In Progress" section.

This very short video explains the feature in more detail.

Available Work Items

A handy query that shows all the work items currently assigned to you. This includes bugs automatically raised when your changes have broken the build. It can be filtered by Iteration and has a handy link for creating new tasks.

Perhaps most usefully, dragging a task into the "In Progress" section changes its status, which is reflected on the team scrum board.

Code Reviews

Code reviews you have requested or received are displayed here, and various filters can be applied. You are doing code reviews using Visual Studio, aren't you?

Conclusion

In short, the My Work window provides a handy at-a-glance look at your current work in progress, as well as offering the extremely useful Suspended work feature which allows you to quickly and easily take on urgent tasks without losing your context.

Friday, February 13, 2015

The breakpoint will not currently be hit. No symbols have been loaded for this document

Here are some ways to fix the notorious "The breakpoint will not currently be hit. No symbols have been loaded for this document" issue in Visual Studio. Many of the steps may not apply to your situation, but I've tried to make this a comprehensive list that you can check whenever you get this issue.
  1. Get latest code from Source Control
  2. Compile.
  3. Run iisreset to reset IIS if you are using IIS.
  4. Attach to the appropriate IIS instance (or all of them).
  5. Add a breakpoint and run your application.
If you miss step 3, your breakpoint will not be hit, even though your assemblies have compiled and the breakpoint is shown as active. You may need to run iisreset in an administrator command prompt to ensure you have the right permissions.

Some more ideas if the above doesn't work:

Check where your DLL is being referenced from, and ensure that it is the code you're trying to debug.

Check which build configuration you are using (Debug/Release), as they may put the DLLs in different places.

Are you attached to the correct process?

For a website on IIS, is the code you're working on the same as the code running in IIS? 

Go to Debug > Windows > Modules and, if the relevant DLL is there, right-click it and load its symbols.
If it's not in the list, try running the code anyway. Sometimes, even though it says the breakpoint will not be hit, that's only because the DLL isn't loaded until you enter a scenario that needs it. Try the scenario that depends on the DLL, and it may just hit the breakpoint anyway.

Restart your browser. You might have something cached from an older dll.


Do you have any other tricks to get around this issue? Leave a comment!