Tuesday, June 23, 2015

How to Be an Organised Developer (and spend more time coding!)

As a developer, your main focus is writing code. But over time you'll find there is a lot more to development than that. If you're not aware of this, you might one day wake up and realise that the notepad file you've been using for passwords and connection strings has gotten out of hand.

Being more organised from the start helps keep you focussed on coding and working efficiently. If you move jobs, you'll pick up a lot of information in the first few weeks, so you'll want to organise it well from the beginning. If you stay in the same job for years, the 'other stuff' you accumulate can get messy and cumbersome.

As with good code, making an effort to organise yourself well from the start pays dividends later on, when it comes to navigating and maintaining all your stuff. Here's a list of some of the "stuff" you'll find yourself accumulating as a developer, and how to keep it well organised.

OneNote (or Equivalent)

An essential tool for all developers.

It's most useful for storing essential information such as test data, database names, licence numbers for tools like ReSharper and LINQPad, and lists.

You can also use it for debugging information, screen grabs, functional specs, checklists and pretty much anything you can think of. OneNote allows you to organise all this quite effectively with its use of tabs and pages.

Note that by "equivalent", I don't mean Notepad++. You need your notes in one well-organised, secure location (save your notebooks remotely); you don't want to have to worry about hitting "Save"; and you need to be able to paste in screen grabs, tables and other rich content. Evernote might come close, but OneNote is trusted in enterprise environments, which can give it the edge.

Logging Access

Pretty soon you're going to need access to logs for auditing, testing or debugging purposes. Remember to record all the access details in your notes software.

Product Documentation/Wikis

As a developer you will probably be responsible for writing a lot of documentation. Make sure it's in an easily discoverable place and well maintained. Good code should be self-documenting, yes, but other stakeholders need to know what the code is doing from a non-technical perspective.

Bookmarks

You'll always have a selection of really important links. This might include:

  • Product documentation/wikis
  • Test Harnesses
  • ALM tools (TFS, Git)
  • Communication tools (Sharepoint, Trello)
  • Administration tools (Timesheets, financial, personnel software)
  • Development learning materials, tutorials, blog posts, communities etc. 
Figure out the best way to organise these based on your needs and make sure they're backed up and accessible everywhere (use Chrome/Firefox Sync or a bookmarks manager).

SQL Files

Most developers will have a collection of SQL files containing common, useful queries - for logging, basic CRUD, etc. Make sure these are in an easy-to-access, secure, backed-up and preferably remote location.

Macros

In web development specifically, tools such as iMacros can be invaluable for automating frequent tasks such as logging in to test sites or running common actions on test harnesses. Remember to keep them well organised and backed up remotely using Dropbox/OneDrive etc.

PowerShell Scripts and Batch Files

You might also have some PC management tasks that need to be automated. PowerShell is fantastic for these kinds of tasks, and is becoming even more useful with the advent of DSC (Desired State Configuration).

LINQPad Scripts

Similarly, LINQPad lets you store and run frequently used code in a lightweight way, without the overhead of a full project.
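
For example, here's the kind of throwaway-but-reusable LINQPad query worth keeping (a sketch with made-up data, written as a "C# Statements" query):

    // A LINQPad "C# Statements" query - the kind of small, reusable script
    // worth keeping in a well-organised (and backed up) queries folder.
    // The data here is made up; a real script might query a log table instead.
    var recentErrors = new[]
    {
        new { Time = DateTime.Now.AddMinutes(-50), Message = "Timeout calling payment service" },
        new { Time = DateTime.Now.AddMinutes(-5),  Message = "Null reference in OrderController" }
    };

    recentErrors
        .Where(e => e.Time > DateTime.Now.AddHours(-1))
        .OrderByDescending(e => e.Time)
        .Dump("Errors in the last hour");   // Dump() is LINQPad's built-in results viewer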

Code Toolbox

Finally, all developers should have a code toolbox: a collection of libraries and code snippets they use regularly in their projects. There's a lot to go into here, so I'll cover it in a post of its own, but basically it could consist of:
  • Project templates 
  • Emailing library 
  • Cloud storage library 
  • Logging classes 
  • MVC HTML helpers, etc.
The idea is to allow for rapid development by having everything you frequently need at your fingertips, so you're not hunting around for code for common tasks such as input forms or membership features. As I said, I'll give this topic a post of its own soon.
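
As a small taste of what might live in a toolbox, here's a minimal sketch of an MVC HTML helper (the namespace and names are hypothetical, and a real one would be more robust):

    using System.Web.Mvc;

    namespace MyToolbox.Web   // hypothetical toolbox namespace
    {
        public static class FormHelpers
        {
            // Renders a label/text box pair - the sort of tiny helper you end up
            // rewriting on every project unless it lives in a shared toolbox.
            public static MvcHtmlString LabelledTextBox(this HtmlHelper html, string name, string label)
            {
                var lbl = new TagBuilder("label");
                lbl.Attributes["for"] = name;
                lbl.SetInnerText(label);

                var input = new TagBuilder("input");
                input.Attributes["type"] = "text";
                input.Attributes["id"] = name;
                input.Attributes["name"] = name;

                return MvcHtmlString.Create(lbl.ToString() + input.ToString(TagRenderMode.SelfClosing));
            }
        }
    }

In any project that references the toolbox, a Razor view can then simply call @Html.LabelledTextBox("email", "Email address").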

Conclusion

Hopefully this has inspired you to ditch that pile of text files and get your administrative stuff in order. Doing so will make you more efficient and help you focus on what's really important - writing code.

How do you organise yourself? Is there anything I've missed? Leave a comment!

Friday, May 29, 2015

Why Bother With Unit Tests?

I still see a lot of scepticism about WHY we should write unit tests and whether they're really worth it.

Of course, they usually add time to the development cycle. They can be hard to create. Quite often, they seem pointless: why do I need to check that my TwoTimesTwo() method returns 4? Of course it's going to!

Well, the benefits of unit tests may not always be obvious. Often they're disconnected from the problems caused by not writing them. For example, a manager is not likely to blame a Production issue on the fact that you didn't write unit tests.

Most of the time, if you release some code that does what it needs to, nobody cares about the quality. However, the people who will care are those who have to maintain your code later, especially if that ends up being:

  1. You; or 
  2. A violent psychopath who knows where you live

Unit Tests can also help to document code, showing you what it should and shouldn't be doing. They can help you understand it better.

But maintainability still isn't enough for many people.

Personally, I don't think anyone can really understand the benefits of unit testing until they've written some and felt the satisfaction of those green ticks, and the confidence they bring. Your code is now rock solid; nobody is going to break it without knowing about it (yourself included).

With unit tested code you can be sure that every component is doing what it should. Acceptance criteria are another step, and I'm a strong proponent of automated, behaviour-driven integration tests too, but unit tests are more granular, covering a vast array of tiny details that can otherwise be easily overlooked and become the source of fiddly bugs later on. And when those bugs are found, the fixes are usually messy band-aids that make the code less flexible. TDD makes the code more maintainable to start with: the need to keep everything testable pushes you towards SOLID, highly modular, interface-driven code.

In the end, the extra time spent writing unit tests is returned several times over in less time spent fixing bugs and maintaining messy code.

The best way to see the benefits of Unit Tests is to write some. Start easy by writing one specifically to test a bug fix. You can then be sure that bug will never surface again.
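
As a hypothetical example (NUnit, with invented names), here's the kind of test that pins down a bug fix - in this case an imagined bug where an empty basket used to throw:

    using System.Linq;
    using NUnit.Framework;

    // Hypothetical class under test - stands in for whatever code the bug was in.
    public class OrderTotalCalculator
    {
        public decimal Total(decimal[] linePrices)
        {
            // The imagined bug: an earlier version threw on an empty basket.
            // Sum() handles the empty case and returns zero.
            return linePrices.Sum();
        }
    }

    [TestFixture]
    public class OrderTotalCalculatorTests
    {
        [Test]
        public void Total_ForEmptyBasket_ReturnsZero()
        {
            var calculator = new OrderTotalCalculator();

            var total = calculator.Total(new decimal[0]);

            Assert.AreEqual(0m, total, "An empty basket should total zero, not throw.");
        }
    }

If the bug ever creeps back in, this test goes red immediately - and in the meantime it documents what Total() is supposed to do with an empty basket.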

Getting started is hard, don't get me wrong. It's like lifting weights for the first time, it's going to hurt. But push through the pain and you'll start to see the benefits. It will get easier.

Monday, May 11, 2015

Build 2015 - Highlights for .NET Web Developers

For those overwhelmed by the 22 pages of videos to come out of Build this year, here are the main highlights relevant to us as ASP.NET developers.

.NET Announcements at Build 2015
Including release candidates of the .NET Framework, ASP.NET and Visual Studio

Build Videos for ASP.NET web developers

Introducing ASP.NET 5
Deep Dive into ASP.NET 5
What's New in C# 6 and Visual Basic 14
Modern Web Tooling in Visual Studio 2015
Modern Web Tooling
A Lap Around .NET 2015
"Project Spartan": Introducing the New Browser and Web App Platform for Windows 10
Microsoft Edge
What’s New in F12 for "Project Spartan"
Using Git in Visual Studio

Friday, March 20, 2015

How to Reduce Deployment Risk

Functionality Switches

Feature flags, controllable by configuration or an "admin tool", are an excellent way to deploy new features. When going live, you can turn on a feature only for your Production test user and run some smoke tests, before turning the feature on for the wider world.

Any issues found later can then easily be mitigated by "switching off" the broken features for live users, while at the same time, leaving them on for the test users so that some fault investigation can be done on Production.

Provided there are no breaking regression changes, this will help you avoid rollbacks.
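
As a rough sketch of how a configuration-driven switch might look in ASP.NET (the config keys, feature name and test-user mechanism are all hypothetical; real implementations are usually more sophisticated):

    using System;
    using System.Configuration;

    // Hypothetical feature switch read from appSettings, with an override for the
    // Production test user so a new feature can be smoke-tested live while it is
    // still switched off for everyone else.
    public static class FeatureSwitches
    {
        public static bool IsEnabled(string featureName, string userName)
        {
            // e.g. <add key="Feature.NewCheckout" value="false" /> in web.config
            var configValue = ConfigurationManager.AppSettings["Feature." + featureName];
            bool enabledForEveryone;
            bool.TryParse(configValue, out enabledForEveryone);

            // The test user sees the feature even while it's off for the wider world.
            var testUser = ConfigurationManager.AppSettings["Feature.TestUser"];
            var isTestUser = !string.IsNullOrEmpty(testUser) &&
                             string.Equals(userName, testUser, StringComparison.OrdinalIgnoreCase);

            return enabledForEveryone || isTestUser;
        }
    }

    // Usage (hypothetical):
    // if (FeatureSwitches.IsEnabled("NewCheckout", User.Identity.Name)) { /* new path */ } else { /* old path */ }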

Good logging

Every significant action should be monitored and logged. Obviously this includes calls to external services, but you should also strive to implement logging based on items in the acceptance criteria. Log the results of actions so that you can see whether they match expectations. This allows for granular diagnosis, so you can see exactly what isn't working and where.
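
For example, here's a sketch of what that might look like around a call to an external payment service (assuming log4net, with invented names - any logging library will do):

    using System;
    using log4net;   // assuming log4net here - any logging library will do

    public class PaymentService
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(PaymentService));

        // Log the call to the external service, its outcome, and enough detail to
        // check the result against what the acceptance criteria say should happen.
        public bool TakePayment(string orderId, decimal amount)
        {
            Log.InfoFormat("Taking payment for order {0}, amount {1}", orderId, amount);
            try
            {
                var succeeded = CallExternalPaymentGateway(orderId, amount);
                Log.InfoFormat("Payment for order {0} {1}", orderId, succeeded ? "succeeded" : "was declined");
                return succeeded;
            }
            catch (Exception ex)
            {
                Log.Error("Payment gateway call failed for order " + orderId, ex);
                throw;
            }
        }

        private bool CallExternalPaymentGateway(string orderId, decimal amount)
        {
            // Stand-in for the real external call.
            return amount > 0;
        }
    }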

Reduced Functionality Instead of Errors

A good, defensive technique is to fall back to previous functionality when something fails, rather than giving the user an error. Obviously this depends on the scenario and this won't always be possible.

However, combined with functionality switches, this can allow users to continue to use your application while you identify a fault using your Production test users. This depends greatly on your logging, of course, as the fault will no longer manifest in the interface.
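
As a simplified sketch of the idea (invented names), fall back to the old behaviour when the new code path fails, while still logging the fault so you can investigate:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch: try the new recommendations engine, but fall back to the
    // old "best sellers" list rather than showing the user an error page.
    public class RecommendationsProvider
    {
        private readonly Func<string, IList<string>> _newEngine;     // new, riskier code path
        private readonly Func<IList<string>> _bestSellers;           // previous, known-good behaviour
        private readonly Action<string, Exception> _logError;

        public RecommendationsProvider(Func<string, IList<string>> newEngine,
                                       Func<IList<string>> bestSellers,
                                       Action<string, Exception> logError)
        {
            _newEngine = newEngine;
            _bestSellers = bestSellers;
            _logError = logError;
        }

        public IList<string> GetRecommendations(string userId)
        {
            try
            {
                return _newEngine(userId);
            }
            catch (Exception ex)
            {
                // Log it, so the fault is visible to you, but degrade gracefully for the user.
                _logError("New recommendations failed; falling back to best sellers", ex);
                return _bestSellers();
            }
        }
    }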

Thursday, March 12, 2015

Hololens Speculation: What Kind of Applications Can We Build?

I'm so excited about Hololens. Its creative potential is huge. There are a wide variety of applications we can develop for it.

I believe the demos shown so far are barely scratching the surface of what we can do with the Hololens. Once the imagination of the development community warms up, and its abilities are clear, we're going to see world changing ideas. Industries will be turned upside down, lives will be changed, and millionaires will be made.

Of course, I'm speculating about what it can and can't do based on the minimal demos we've seen so far, but, provided those turn out to be accurate, there is plenty to go on already. I'm going to make some assumptions about its abilities and use them to highlight some of the many directions we can take in creating for the interface of the future.

Presence Sharing

One of the most interesting uses for the Hololens is allowing others to share your experience, and interact with it. This opens up many prospects for communication and collaboration in virtually every industry.

Think about how cool GoPro cameras are and what we're able to do with them. Now imagine seeing the video live and being able to interact with it.

This particular feature also has plenty of promise for gaming, with users interacting with the same reality, or the same game area.

Reality Overlays

Probably the main purpose of Hololens is its ability to overlay the imagined onto the real. We have seen some examples of this, and there is so much more to come. Interfaces will probably make up the bulk of these overlays, and there is plenty of scope for variety in how these work. But the possibilities go much further than just interfaces.

We've seen game characters and levels merge with the real world. What about augmenting our environment with photo-realistic people, new fixtures and furniture, or movie scenes? Or changing our atmosphere with movement, light, and sound? Virtual Reality might do this better. But having the real world still in view gives a certain edge to the atmosphere.

Gesture Controlled Augmentations

The use of gesture controlled interfaces might have been around for a while, but it takes on a new potency in the realm of artificial reality. Rather than interacting with things on a screen, you'll now be able to "physically" interact with "objects" in your world.

And while the recognisable gestures are still quite simple, this will only get more complex with time. As the Kinect technology gets better at recognising more fingers and more intricate movements, it will allow for more advanced control in much the same way as musical instruments or crafts.

Reality Recognition

Let's not forget that the Hololens cameras can be used to process the real world and the things in it. Measuring distances, identifying objects, detecting movement and more can all feed into the interaction.

How far can we take this real-world processing? Imagine tailoring your experience based on what the Hololens sees: popping up an information snippet about a visible landmark, finding your friends in a crowd, or identifying danger.

Voice

Like gesture controls, voice finds new opportunities in this context. It's part of the experience, narrating what you see, talking over the internet, recognising commands, sometimes recording. What else?

*

There are probably several more Hololens abilities that I've missed, so please leave a comment if you can think of any. Each of the items above has the potential for some groundbreaking new applications. Combine two or more concepts and the possibilities increase exponentially.

We just need to start thinking differently about what we can do. Rather than extrapolating our current computers and software to this new paradigm (which we should still do), we should try to think about what new possibilities all these new capabilities afford us, separately as well as combined.

I didn't want to get too deep into these possibilities; I really just wanted to highlight them, to get people thinking about how they might use each one.

It'll probably take a few years for the world to realise what Augmented Reality can do for it. But all ideas are built on other ideas.

We're just getting started.

Tuesday, February 24, 2015

A Quick Summary of DevOps

What exactly is DevOps?

"DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support." (http://theagileadmin.com/what-is-devops/)

Like Agile, it's a rather large topic, but the main point of it is that developers work much more closely with Operations to ensure a more resilient release cycle. It's also characterised by Operations using many of the same techniques as developers - for example: source control, automation, and of course the Agile methodology.

Adopted by all the big tech companies, DevOps allows for hyper-efficient deployments, with much faster releases and far fewer production issues. In fact, the highest performers use DevOps to deploy 30x more frequently, with 8,000x faster lead times, twice the change success rate, and 12x faster Mean Time To Repair than the worst performers (those who aren't using DevOps).

As a result, Amazon deploy to Production every 11.6 seconds. And they have very few Production issues. How? DevOps.

There's a lot to it, but some of the basic concepts that allow for this level of efficiency include the following.

Smaller Changes, Deployed More Frequently

Some of the biggest disasters in IT history have come from the projects with the longest lead times, usually for one reason: when they went live, it was the first time the code had ever been in a Production environment. To be good at DevOps and achieve fast, reliable releases, you need to get good at releasing by doing it more often. This means shifting to a model of smaller changes and more efficient releases.

Do Painful Things More Frequently

The painful process of releasing to Production needs to become easier and more routine. This is in contrast to traditional thinking, which tends to treat releases as big, scary things that everyone has to make a big deal over. Being efficient at DevOps involves streamlining and improving the release process in the same way we have streamlined our development and testing practices over the past few years.

Common Build Mechanisms

Rather than having distinctly different environments and setups for Development, Test, and Production, there needs to be a concerted effort to make all these environments as similar as possible.

Configuration as Code

To make Common Build Mechanisms easier to implement, our deployment artifacts - configuration, build scripts, automation scripts and so on - should be treated in the same way as code and managed under Source Control.

Automation

Like regression testing, deployments and environment setups should be as fully automated as possible, to allow for frequent and error-free repetition.

Definition of Done to include "Works in a Production Environment"

To take DevOps seriously, the ability to deploy to Production needs to be built into work processes.

Targeted Deployments

Techniques such as feature flags and deploying changes to small subsets of users can allow an organisation to get started with faster deployments.
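
One common way to handle the "small subsets of users" part (a sketch with invented names, not tied to any particular tool) is to map each user into a stable percentage bucket:

    // Hypothetical percentage rollout: each user always lands in the same bucket,
    // so a change can be exposed to a small, stable subset of users first.
    public static class Rollout
    {
        public static bool IsInRollout(string userId, int rolloutPercentage)
        {
            if (rolloutPercentage >= 100) return true;
            if (rolloutPercentage <= 0 || string.IsNullOrEmpty(userId)) return false;

            // Simple deterministic hash mapped onto 0-99
            // (string.GetHashCode isn't guaranteed to be stable across runtimes).
            var bucket = 0;
            foreach (var c in userId) bucket = (bucket * 31 + c) % 100;

            return bucket < rolloutPercentage;
        }
    }

    // Usage (hypothetical): if (Rollout.IsInRollout(User.Identity.Name, 5)) { /* new behaviour */ }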

High Trust, Peer Review over Approval

To adopt DevOps, we need to free ourselves of the learned behaviour that low trust, approval barriers and long lead times make our products more stable. The evidence is that they don't. In studies, higher trust and shorter lead times are indicators of both quality and happiness. While there might be some barriers to overcome up front, these changes make everyone more confident, more productive, and more efficient overall.

More interaction between Development and Operations

The underlying concept of DevOps is more interaction between Development and Operations. While before, we had a wall of procedures and artifacts between the two departments where work was thrown over, the goal now is to "knock that wall over and build a bridge". This means Operations getting more involved in the process earlier on to create common build mechanisms, configurations and automation. And it also means Developers being more involved in the release process.

Conclusion

Our job as developers is to release to Production without breaking anything. To get better at that, we need to do it more. A lot more.

As with the adoption of automated testing, work needs to be done up front to set up the infrastructure that allows DevOps to be safe and efficient. But, like automated testing, these efforts pay huge, ongoing dividends.

DevOps is being used by most of the large tech companies and is allowing them to become extremely competitive by drastically reducing their lead times and quality issues. Those who refuse to adopt it risk being left far, far behind.
*
There is a lot more to DevOps than I've mentioned here. Learn more with this course:
http://www.microsoftvirtualacademy.com/training-courses/assessing-and-improving-your-devops-capabilities
If you don't have much time, make sure you at least watch video 2.

Thursday, February 19, 2015

Multitasking with My Work in Visual Studio

In Visual Studio 2012+ there is a button in Team Explorer that you may not use as often as you should: "My Work". It contains four sections.

In Progress Work

This contains your current "context": what you're currently working on and your pending code changes.

Suspended Work

This is the most interesting feature, I think. It allows you to change the context in which you are working, so you can switch tasks on demand.

Imagine you're deep into coding a feature and an urgent bug is raised that demands immediate attention. Your code might not even compile yet but you have to drop it and get a working setup.

Simple. Just hit "Suspend" and your current work will be set aside. And it's not just your code changes: it doesn't just save the current state of the source code, it saves EVERYTHING - breakpoints, watches, open windows.

When the bug fix is complete and checked in, simply drag your suspended work back into the "In Progress" section.

This very short video explains the feature in more detail.

Available Work Items

A handy query that shows all the work items currently assigned to you. This includes bugs automatically raised when your changes have broken the build. It can be filtered by Iteration and has a handy link for creating new tasks.

Perhaps most usefully, dragging a task into the "In Progress" section changes its status, which is reflected on the team scrum board.

Code Reviews

Code reviews you have requested or received display here, and various filters can be applied. You are doing code reviews using Visual Studio, aren't you?

Conclusion

In short, the My Work window provides a handy at-a-glance view of your current work in progress, as well as the extremely useful Suspended Work feature, which allows you to quickly and easily take on urgent tasks without losing your context.

Friday, February 13, 2015

The breakpoint will not currently be hit. No symbols have been loaded for this document

Here are some ways to fix the notorious "The breakpoint will not currently be hit. No symbols have been loaded for this document" issue in Visual Studio. Many of the steps may not apply to your situation, but I've tried to make this a comprehensive list that you can check whenever you get this issue.
  1. Get latest code from Source Control
  2. Compile.
  3. Run iisreset to reset IIS if you are using IIS.
  4. Attach to the appropriate IIS instance (or all of them).
  5. Add a breakpoint and run your application.
If you miss step 3, your breakpoint will not be hit, even though your assemblies have compiled and the breakpoint is shown as active. You may need to run iisreset from an administrator command prompt to ensure you have the right permissions.

Some more ideas if the above doesn't work:

Check where your dll is being referenced from and make sure that location contains the code you're trying to debug.

Check you are building in the correct configuration (Debug/Release), as they may put the dlls in different places.

Are you attached to the correct process?

For a website on IIS, is the code you're working on the same as the code running in IIS? 

Go to Debug > Windows > Modules and, if the relevant dll is there, right-click it and load its symbols.
If it's not in the list, try running the code anyway. Sometimes the breakpoint is only reported as unreachable because the dll won't be loaded until you enter a scenario that needs it. Try the scenario that depends on the dll, and the breakpoint may be hit after all.

Restart your browser. You might have something cached from an older dll.


Do you have any other tricks to get around this issue? Leave a comment!