Learning Git

As you may have noticed, Team Foundation Server not only supports TFVC but also Git. Now the big question is: why? Microsoft says that TFVC is not going away, but that there are scenarios where one or the other is a better choice.

I’m by no means a Git expert, so after reading a very interesting email discussion on Git within the ALM Rangers I realized I needed to learn more about it.

So here are a couple of links to get you started if you want to learn Git:

Is Git going to be the future? I don’t know. But Git is definitely popular in some circles and an extra tool in your belt is always a good idea.

What is your take on Git? Do you like it? Use it? Hate it? How have you learned to use Git?

My favorite books: The Phoenix Project

Have you ever read a computer book that was a real page-turner? Although I love to read, technical books usually aren’t.
But The Phoenix Project is different! While working with the ALM Rangers on a new project, Sam Guckenheimer mentioned The Phoenix Project as a great book, so I ordered my copy and couldn’t stop reading.

The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win


The Phoenix Project is a novel about a company that anyone who works in IT, or in a company that uses IT (so practically everyone), will recognize.

It shows the struggles of Parts Unlimited to keep up with the competition in a world that keeps changing. In the end, Parts Unlimited adopts a DevOps way of working and learns quite a lot while implementing it.

What I found surprising about this book was how much I could relate to everything that’s happening. You can almost feel the pain of another deployment going wrong or of a manager trying to push his little side project to the front of the line.

And then you see how they turn things around and you start feeling better.

If you work in an IT company that resembles the way Parts Unlimited works, this book is a must read. Not only for yourself but for your manager and every other person in your company.

Now the interesting question is: how can you apply the principles from this book while working with products like Visual Studio, Team Foundation Server and Azure? But that’s for another blog post ;-)

Have you read the Phoenix Project? Did you like it? Or do you know of similar books that are worth reading? Please leave a comment!

Optimizing Team Foundation Server Build Time

Fast feedback is important. Knowing you broke something a month ago or just a few minutes ago can make a huge difference.

More and more teams use a build server to compile the code and run a variety of checks on each check-in. This is often referred to as the commit stage. One nice TFS feature for guarding the quality of your code base is the Gated Check-in. A Gated Check-in means the whole commit stage is executed on the code a developer wants to check in; only when the commit stage succeeds is the check-in allowed.

This way, you can be sure that the code on your server is always in a good state. An important characteristic of successful gated check-ins is that they are fast. If developers have to wait a long time, they will start checking in less regularly and find ways around your build server to share code with other developers.

How can you optimize the build time of your Gated check-in?

What’s acceptable?

An acceptable build time depends on your team. However, from experience you should aim for a Gated Check-in build of less than five minutes. Somewhere around one to two minutes would be perfect, but that’s often hard to realize. It’s important to check your build time regularly and optimize whenever necessary.

1. Check your hardware

The Team Foundation Server build architecture consists of two important parts:

  • Build Controllers
  • Build Agents

A Build Controller connects to Team Foundation Server. It monitors your build queue and contains the logic for executing the build workflow. The actual build runs on a Build Agent. This means that Build Controllers and Build Agents have different resource needs.

A Controller is memory intensive; an Agent is both CPU and disk intensive. When you do a default installation of a TFS Build Server, both the Agent and the Controller are installed on the same machine. This is an easy setup and requires fewer servers, but if build speed becomes an issue, an easy solution is to scale out your build hardware. Moving your agents to a different server than your controller is an easy first step.

Instead of scaling out you can also scale up by using a machine with more CPU cores, more memory and faster disks. I was once at a customer whose build took 90 minutes to run. Because they were constrained by their on-premises hardware, we moved the build server to Azure. By using a larger VM size on Azure the build time dropped to 15 minutes. That’s a very quick win.

2. What are you building?

When you create a new standard Build Definition in Visual Studio, you get a build that monitors all the code in your team project. This makes it easy to get started, since you probably have a single source tree with one solution file in it that contains your code.

This is defined in your Source Settings in your Build Definition file as you can see in the following image:

[image: Source Settings of the Build Definition showing an Active and a Cloaked working folder]

Here you see there are two work folders defined. One points to your source code and is marked as Active. The Cloaked working folder makes sure that your Drops folder (which is created by a build that copies the resulting binaries to the TFS server) is not monitored.

When the Build server starts a new Build it will get a fresh copy of all the source files beneath your work folder mappings (whether this is an incremental get or a get from scratch is covered in the following section).
If you start getting multiple branches or other folders that contain documentation or other non-essential stuff, the build server will download all of those files every time a build runs.

Modifying your Source Settings to only include the files you really need can speed up your build time quite substantially.
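As a minimal sketch of what a trimmed-down mapping could look like (the server paths are purely hypothetical):

Status     Source Control Folder                   Build Agent Folder
Active     $/MyTeamProject/Main/Src/MySolution     $(SourceDir)\MySolution
Cloaked    $/MyTeamProject/Drops

Everything outside the Active mapping (other branches, documentation folders, the Drops folder) is never downloaded by the build.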

3. How are you getting your sources?

If you create a new Build Definition you will see the following setting in the Process tab:

[image: Process tab of the Build Definition showing the Clean workspace property set to True]

What’s important to note is that the Clean workspace property is set to true. This means that every time a build runs, all the source files are first removed from the Build Server and then the whole source tree is downloaded.

By setting this value to false, you will do an incremental get, just as you would in Visual Studio. The Build Server will only download the files that have changed since the last build.

Now of course, this is a nice default. Having a clean slate every time you run a build makes sure you won’t have any artifacts left from previous builds.

But you know your code and your team best. You can experiment with setting this value to false and checking what it does to your build time.

4. Where are you getting your sources from?

Normally you connect your Build Server directly to Team Foundation Server. This means that when the Build Server needs to download a newer version of a file, it will go directly to the TFS server and download it from there.

A Team Foundation Server Proxy is a cache that sits between your Build Server and the TFS Server. The proxy caches source files and optimizes getting files from TFS. Especially when your Build Server and TFS Server are not co-located, installing a proxy close to your Build Server (on the controller machine or on a separate server in the same location as your build infrastructure) can save you a huge amount of time.

See the following MSDN article for configuration details: Configure Team Foundation Build Service to Use Team Foundation Server Proxy

5. How are you building?

There is a huge chance your Build Agent is running on a multi-core machine. MSBuild, which is used in the Build Template to do the actual compilation of your code, has an option to use multiple cores and parallelize your build process.

If you look at MSDN you will see the following documentation:

/maxcpucount[:number]
/m[:number]

Specifies the maximum number of concurrent processes to use when building. If you don’t include this switch, the default value is 1. If you include this switch without specifying a value, MSBuild will use up to the number of processors in the computer. For more information, see Building Multiple Projects in Parallel with MSBuild.

The following example instructs MSBuild to build using three MSBuild processes, which allows three projects to build at the same time:

msbuild myproject.proj /maxcpucount:3

By adding the /m option to your TFS Build you will start utilizing multiple cores to execute your build. You can add this option in your Process tab for the MSBuild arguments property. This is also a setting you have to test and make sure it works for you. Sometimes you will get Access Denied errors because multiple processes are trying to write to the same folder.
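For example, on a quad-core build agent you could set the MSBuild arguments property to something like this (the core count is only an illustration; /m without a number lets MSBuild use all available processors):

/m:4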

6. Building less

When you are dealing with a huge application that perhaps consists of several sub-applications, you can split it into distinct parts that can be built separately. By using NuGet and your own NuGet repository you can have each of those parts publish a precompiled, ready-to-go NuGet package. That package can then be used by the other applications that depend on it.

This means that you won’t have to build all your source code every time you run a build. Instead you only build those parts that have changed and reuse the resulting NuGet packages in other parts of your build.

If you have a look at the NuGet documentation you will find some easy steps that allow you to create a package and set up your own NuGet server.
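As a rough sketch of what publishing such a part could look like (the project name, version, feed URL and API key are placeholders), the build of that part could pack and push its output with nuget.exe:

nuget pack MyDataLayer.csproj -Build -Properties Configuration=Release
nuget push MyDataLayer.1.0.0.nupkg MyApiKey -Source http://nuget.example.local/nuget

The builds of the applications that depend on it then restore MyDataLayer from that feed instead of compiling its sources again.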

And these are my favorite steps for optimizing a TFS build. What are your favorite steps? Anything that I missed? Please leave a comment!

My SDN presentations

Last week I had two presentations and one ask the experts at SDN. If you are in the Netherlands, SDN is a great event to visit!

What you may have missed in C#

C# 6 is coming with a bunch of new features that are pretty exciting. However, have you already mastered all the previous versions of C#? In this session you will take a swift tour across all versions of C# first, making sure that you’re up to date on everything, followed by an in-depth focus on the new features of C# 6. Finally you will learn how Roslyn, the new C# compiler, will change your life as a C# developer.

Slides

Getting some Insight with Application Insight

Do you know what your customers are doing? Can you respond quickly to incidents? Do you know the favorite features of your customers? This session will show you how to get insight into these things and more. You will learn how to use Application Insights to monitor your web, native and desktop applications. Through code examples and real world scenarios you will see what Application Insights can offer you and how you can start using it right away.

Slides

Ask the experts

At the end of the day Hassan Fadili (my friend and fellow ALM enthusiast) had an hour of Ask The Experts. We had some interesting discussions ranging from PowerShell Desired State Configuration and Release Management to build server optimizations.
All in all, it was a great day!

If you are interested in having a .NET or ALM session at your company, send me a mail or leave a comment!

Adding Code Metrics to your Team Foundation Server 2013 Build

When implementing a Deployment Pipeline for your application, the first step is the Commit phase. This step should do as many sanity checks on your code as possible in the shortest amount of time. Later steps will actually deploy your application and run all kinds of other tests.

One check I wanted to add to a Commit phase was calculating the Code Metrics for the code base. Code Metrics do a static analysis on the quality of your code and help you pinpoint those types or methods that have potential problems. You can find more info on Code Metrics at MSDN.

Extending your Team Foundation Server Build

Fortunately for us, TFS uses a workflow based process template to orchestrate builds. This workflow is based on Windows Workflow Foundation and you can extend it by adding your own (custom) activities to it.

If you have a look at GitHub you’ll find a lot of custom created activities that you can use in your own templates. One of those is the Code Metric activity that uses the Code Metric Powertool to calculate Code Metrics from the command line.

If you check the documentation, using the Code Metric activity comes down to downloading the assemblies, storing them in version control and then adding the custom activity to your build template.

And that would be all there is to it, if you weren’t running Visual Studio/Team Foundation Server 2013. For example, check the following line of code on GitHub:

string metricsExePath = Path.Combine(ProgramFilesX86(), 
       @"Microsoft Visual Studio 11.0\Team Tools\Static Analysis Tools\FxCop\metrics.exe");
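       // Note: on Visual Studio/Team Foundation Server 2013 the Code Metrics Powertool is installed
       // under "Microsoft Visual Studio 12.0", so this hard-coded 11.0 path is exactly what has to change.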

This code still points to the old version of the Code Metrics Powertool. There were also some other errors in the Activity. For example, setting FailBuildOnError to false won’t have any effect.

Fortunately, all the activities are open source. Changing the path was easy. Fixing the FailBuildOnError bug was a little harder since it’s impossible (to my knowledge) to debug the custom activities directly on the Build server.

But there is a NuGet package for that!

As good developers, we first create a unit test that shows the bug really exists. By making that test pass, we then fix our bug. Unit testing Workflow activities is made a lot easier with the Microsoft.Activities.UnitTesting NuGet package.

Using this NuGet package I came up with the following ‘integration’ test:

[TestMethod]
[DeploymentItem("Activities.CodeMetrics.DummyProject.dll")]
public void MakeSureABuildDoesNoFailWhenFailBuildOnErrorIsFalse()
{
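    // Arrange: create the activity under test and mock the TFS build services it resolves from the workflow extensions.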
    var activity = new CodeMetrics();

    var buildDetailMock = new Mock<IBuildDetail>();
    buildDetailMock.SetupAllProperties();

    var buildLoggingExtensionMock = new Mock<IBuildLoggingExtension>();

    var host = WorkflowInvokerTest.Create(activity);
    host.Extensions.Add<IBuildDetail>(() => buildDetailMock.Object);
    host.Extensions.Add<IBuildLoggingExtension>(() => 
           buildLoggingExtensionMock.Object);
    host.InArguments.BinariesDirectory =                             
                          TestContext.DeploymentDirectory;
    host.InArguments.FilesToProcess = new List<string> 
    { 
       "Activities.CodeMetrics.DummyProject.dll" 
    };

    host.InArguments.LinesOfCodeErrorThreshold = 25;
    host.InArguments.LinesOfCodeWarningThreshold = 20;

    host.InArguments.MaintainabilityIndexErrorThreshold = 60;
    host.InArguments.MaintainabilityIndexWarningThreshold = 80;

    host.InArguments.FailBuildOnError = false;

    try
    {
        // Act
        host.TestActivity();

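        // With FailBuildOnError set to false, exceeding a metrics threshold should leave the build
        // PartiallySucceeded instead of failing it.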
        Assert.AreEqual(BuildStatus.PartiallySucceeded,  
                 buildDetailMock.Object.Status);
    }
    finally
    {
        host.Tracking.Trace();
    }
}

I’ve configured the Code Metrics activity to run the analysis against a dummy project dll with some threshold settings and, of course, FailBuildOnError set to false. Making the test pass is left as an exercise for the reader 😉

Extending the Build Template

As a final step I’ve added parameters to configure the different thresholds and some other important settings to the Build workflow. That way, a user can configure the Code Metrics activity by editing the Build Definition:

[image: Build Definition process parameters for configuring the Code Metrics thresholds]

And that’s it! You can download the code with the modified activity code, the workflow unit test and a copy of an implemented build workflow template here.

Useful? Feedback? Please leave a comment!

Accessing TFS from behind a proxy

Lately I’ve been working with a client with very strict security rules. One of their policies is that all internet traffic runs through a proxy. This causes some problems when accessing a remote Team Foundation Server over HTTP.

One of the issues we ran into was using Excel to connect to Team Foundation Server to select queries and edit work items.

Excel gave the following error:

[image: Excel error dialog showing a 407 Proxy Authentication Required error at the bottom]

As you can see, at the bottom a 407 Proxy Authentication error is mentioned. This error happens because by default programs like Excel and Visual Studio are not configured to use your proxy settings when connecting to another application.

Proxy settings

Configuring a .NET application can be done by editing the app.config file. If you look at the MSDN documentation you’ll see that you can use the defaultProxy element:

<defaultProxy enabled="true|false" useDefaultCredentials="true|false">
    <bypasslist> … </bypasslist>
    <proxy> … </proxy>
    <module> … </module>
</defaultProxy>

The most important settings for our situation are the enabled and useDefaultCredentials attributes. Both should be set to true to make sure that Excel uses the proxy settings when connecting to TFS.

You can also specify a proxy element that specifies which proxy can be used:

<proxy usesystemdefault="True" bypassonlocal="True"/>

By setting usesystemdefault to True, your application will use the proxy settings that are configured in Internet Explorer. In corporate environments these settings are most often configured through a group policy so all computers have the correct settings.

Combining these will give you the following:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.net>
    <defaultProxy enabled="true"
                  useDefaultCredentials="true">
      <proxy usesystemdefault="True"
             bypassonlocal="True"/>
    </defaultProxy>
  </system.net>
</configuration>

Configuring Excel

When you have a 32-bit version of Excel 2013 installed, your executable file can be found in:

C:\Program Files (x86)\Microsoft Office\Office15

If you have a different version of Excel (like 2007), you have to look in your Microsoft Office folder for the correct version number.

To add the proxy configuration, add a new text file named excel.app.config to that folder with the following content:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.net>
    <defaultProxy enabled="true"
                  useDefaultCredentials="true">
      <proxy usesystemdefault="True"
             bypassonlocal="True"/>
    </defaultProxy>
  </system.net>
</configuration>

Make sure you restart Excel so that it loads the new settings. And that’s all you have to do. After these settings, Excel will use your proxy configuration and allow you to connect to Team Foundation Server.

Questions? Feedback? Please leave a comment!

Release Management: The specified path, file name, or both are too long

While working with Release Management a new deployment suddenly gave an error:

ERROR: The deployment for release template ‘XXXX’ has not been completed successfully. Check the releases history for more details.
Exit: 1

The release history didn’t give much information but the Event Viewer showed the following:

The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
   at System.IO.PathHelper.GetFullPathName()

So what’s happening here?

When a new release starts, the Deployment Agent on your server copies all required data from your Release Management server to a local folder on your deployment server. For example, if you have a component MyDatabase that is located in the root of your build package, you can use a backslash in your component configuration to tell Release Management to look for the component at the root of your build drop location.

[image: Release Management component configuration pointing to the root of the build drop location]

Now all the data in your build folder gets copied over to the deployment server and gets stored at:

C:\Users\<UserNameForDeploymentAgent>\AppData\Local\Temp\ReleaseManagement\<ComponentName>\<VersionNumber>

The problem I was having was that the customer not only had a database project but also a really long path and file name in a web project. The website got published to _PublishedWebsites\WebsiteName and got copied to the deployment server. That, together with the temp folder and the component name, was way too long.

Now of course we could have shortened the path name. But the underlying problem was that not only our database was copied, but the complete build package. That is a waste of time and resources.

Splitting the build package

To resolve the issue, you can change the build to output each project to its own folder. This way, the build drop location would contain a folder MyDatabase with only the database files. It would also contain a specific folder for the website with the published website files.

You can configure this by adding the following MSBuild argument to your Build Definition:

/p:GenerateProjectSpecificOutputFolder=true

[image: Build Definition MSBuild arguments set to /p:GenerateProjectSpecificOutputFolder=true]

Now the structure of your build drop location changes and contains a separate folder for each project. This means that a folder MyDatabase is created with all the database files in it. Changing your component to point to MyDatabase now makes sure that only the required data gets copied to your deployment server.
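As an illustration (the names are only examples), the drop location goes from one big combined output folder:

\\tfs\Drops\MyBuild\MyBuild_1.0\
    <binaries of every project mixed together>
    _PublishedWebsites\WebsiteName\...

to a folder per project:

\\tfs\Drops\MyBuild\MyBuild_1.0\
    MyDatabase\        <- only the database output
    WebsiteName\       <- only the website output

so a component that points to MyDatabase only picks up the files it actually needs.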

What about Output location?

In TFS 2013 a new option Output location was added to the Build Definition. This property can be set to:

  • SingleFolder
  • PerProject
  • AsConfigured

PerProject does not do what you might initially think. The PerProject option creates a single folder for each item you specify under the Build –> Projects option (parameter 2.1 of the process). This means that if you build two solutions as part of your build, SingleFolder will combine the results of those two solutions into one folder, while PerProject will output a folder for each solution.

AsConfigured won’t copy the build output anywhere. Instead, you can attach your own scripts that copy the build output from your build location to your drop folder. This gives you total freedom. In this case however, using the GenerateProjectSpecificOutputFolder MSBuild argument did the trick.

Comments? Feedback? Questions? Please leave a comment!

Why I am a Microsoft ALM Ranger

Do you know who the ALM Rangers are? Here is a snippet describing what mission the ALM Rangers have:

The Visual Studio ALM Rangers provide professional guidance, practical experience and gap-filling solutions to the ALM community.

This means that we as Rangers try to help you implement a professional Application Lifecycle Management strategy. We do this by creating guidance, such as the TFS Planning and Disaster Avoidance and Recovery Guide that helps you install Team Foundation Server; tooling, like the Unit Test Generator; and articles and whitepapers, like the one on Feature Toggles in MSDN Magazine.

As you can see, the Rangers provide a lot of resources! You can find the complete list at ALM Rangers Solutions.

Proud to be an Active ALM Ranger!

The Rangers are a large group of Microsoft and non-Microsoft people who have a strong passion for ALM and TFS. When you join the Rangers, you start out as an associate Ranger. At the end of every year, all ALM Rangers contributions are counted and Rangers are promoted or scrubbed depending on their activity.

This process happens in June, at the end of Microsoft’s fiscal year. So you can understand I was very happy to receive the following email:

COWABUNGA!

The following Associate ALM Rangers have been moved to the Active ALM Rangers group on http://aka.ms/vsarindex based on FY13/14 achievements:

Wouter de Kort

And there was my name! As of that moment I am an Active ALM Ranger.

So why would you want to be an ALM Ranger?

The Rangers are volunteers who use their own time and resources to work together to produce all the guidance and tooling to help others work with ALM solutions.

Why would you want to spend your own time voluntarily? Well, I can’t speak for others but I think most of the Rangers have the following reasons:

We love helping others. Writing guidance, giving presentations, responding to questions. All those activities and more are very popular with the Rangers.

We love learning new things. When joining the Rangers, you join a large group (at this moment 182!) of enthusiastic and very knowledgeable people. You work together on researching new stuff and creating guidance and tooling. You also work directly with the Visual Studio Product Group, which opens up an enormous amount of knowledge. For example, when working with Release Management I ran into a couple of issues. A few quick emails later, I was in direct contact with someone at Microsoft who works on Release Management!

Do you want to be a Ranger?

If you look at the ALM Rangers Flight plan you see that we are currently working on a Windows 8 app, guidance for running TFS on Azure, researching SAFe, Config as Code and a bunch of other projects.

[image: ALM Rangers Flight Plan]

As you can understand, we can always use the help of passionate individuals who love ALM! If you are interested in becoming a Ranger, check out the blog post Understanding the Visual Studio ALM Rangers and leave a comment, reach me on Twitter (@wouterdekort) or send me an email (wouter.dekort@seizeit.nl).

TechEd Europe Day 4

One of the things I hadn’t done any sessions on yet were the design principles behind Windows 8 Metro applications. So today I started with a session from one of Microsoft’s Technical Evangelists to get up to speed on it.

Designing Apps With Metro Principles and Windows Personality

The session started with a set of core principles of Metro:

  • Do more with less
  • Pride in craftsmanship
  • Fast and fluid
  • Authentically Digital
  • Win as One

What does this mean? Well, a few basic things you need to consider when designing your app.

First, make sure you have a clear goal with your app. You should aim to make your app excel at one specific thing and then design your app in such a way that everything supports that. This means leaving out all the non-essential things like chrome and navigation.

An example illustrates this. If you ask a typical group of users to draw a Windows 7 application, most of them will draw a start button and some kind of chrome with the typical Windows buttons. Ask them to draw a Windows 8 app and they will suddenly draw the content, not the chrome. And that’s the most important thing in a Windows 8 app: the content.

Doing more with less means that you should remove all chrome and let your app focus on the content. So, for example, a news reader should focus on news items, not on a navigation chrome.

Take pride in your app! Focus on the details and make sure that what you do is excellent. If something can’t be done perfectly or doesn’t support your ultimate goal, leave it out.

Fast and fluid has to do with how your app responds and supports the touch language. One thing I noticed last week is that Microsoft is calling touch a language. You should design your app touch-first; mouse and keyboard support will then follow automatically.

Embrace the fact that you are on a digital platform. Don’t try to mimic the real world. Users are used to being in a digital world and we can take advantage of that. Semantic zoom is a nice example: in the real world, zooming only changes the size of an object, while in a digital world we can show a whole different perspective on our data. Show how popular items are by using different sizes, for example.

Winning as one is all about integrating with the Windows ecosystem and making sure that the whole is greater than the sum of the parts. We can use contracts for that such as the search and share contract.

Other design elements range from animations (which are part of the WinRT library so all apps get a consistent feel) to a clear use of typography that lets users automatically focus on what is most important.

This was a great session and I would really advise you to watch it. It gave some tips on doing a complete review of a project, from the process all the way to the code.
One thing that was a pity, however, was that the speaker lost a lot of time showing some videos on how great Australia is and telling some fun facts about the Netherlands. Skipping those would have made the session even better.
I just want to share some notes I took during this session.

Do you evaluate the whole process?

What things do you start looking for:
  • Dependencies
  • Commented out code
  • No tests
  • Duplication
  • Long methods
  • Standards
  • Bad naming
  • God objects
  • No exception handling
  • No comments
  • No code reviews
  • No build process
  • No design docs
  • Defensive coding
Are the developers getting bogged down in doing user interface work? If so, they won’t have enough time to focus on good code and you will get problems.
So split User Interface and development work from each other.
Do you have a Scrum master?
Do you have Continuous Integration?
Do you have Continuous Deployment?
Do you have a Schema master?
Do you have a TFS master? Does he check policies, comments and builds?

Are you on the latest version of all tools?

Can you do a get latest and compile on a fresh machine? If not, create an instructions-to-compile document.

Can you run all unit tests? In this phase, all unit tests should succeed and all integration tests should fail.

Can you create the database easily? Do all integration tests succeed now?
Even better is if all of this can be set up with a PowerShell script.

Then you start looking at the code. Is there a consistent way of naming the solution and projects?
Is there any documentation?

Make sure that your definition of done is reflected in your docs. This way your documentation stays up to date throughout all the Scrum sprints.

In the new school of thought, your documentation should consist of four documents:

  • Business
  • Compile
  • Deploy
  • Technologies

Other documentation will consist of your unit tests and work items.

Then you can use Visual Studio Ultimate (or any third-party tools) to analyze the architecture. Create dependency graphs and discuss them together with the development team. Analyze those graphs to make sure everything is OK. If not, create a work item for it.

Set up a process of code analysis where you encourage the devs to improve each class each time they touch it. Don’t let them fix everything in one big run; that’s way too hard. Doing it incrementally and setting clear goals for each sprint works much better.

At the end of the presentation a link to Rules To Better Architecture and Code Review was mentioned. I just read it and I can recommend it to anyone who’s interested in the subject.

Or if you don’t like reading, there is also a video which describes this process.

Building Your First Windows 8 Metro Style App

So, that’s it for the sessions. At the end of the Friday I did one workshop on building my first Windows 8 Metro style app.
Microsoft set up a workshop room with a couple of PCs that had touch monitors! Really cool stuff to play with. In the workshop a Contoso Cookbook application was built that got some data from a web service and then used a grid layout to let the user navigate it.
We also integrated with both the Share and Search contracts. The helper applications in the SDK are really nice for testing your contracts!
It’s a nice introduction and Microsoft told us that they will release this workshop as a hands-on lab.
And that was TechEd! I will go into more detail on some specific things I learned in my next blogs. If there is something you want to know more about, feel free to leave a comment or send me an email and I will write a blog about it.

TechEd Europe Day 3

Day 3 didn’t have a keynote so we could choose our own sessions. The first session I went to was about SQL Server Data Tools.

I worked with the database projects in VS2010 and I didn’t really like them. It was hard to keep them in sync and I’ve seen them abandoned after some time in a project.

SSDT is the successor to the database project and from what I have seen it’s a big improvement! It focuses more on project-based development, with nice integration into Visual Studio. The new SQL Server Object Explorer supports developing against a local database, SQL Server and the Azure SQL service. You can now also use IntelliSense, refactoring and other functions like Go To Definition and Find All References inside your SQL scripts.

Another nice thing is the development workflow that SSDT supports. You develop against your LocalDB and then easily sync your changes to a production server. With the new retargeting support you can even check whether your schema is valid against an Azure or SQL Server 2012 instance. This, combined with the ability to create snapshots and use version control, should definitely make our lives a little easier.


Windows Azure Internals

This session was presented by Mark Russinovich so I wanted to check it out. It was a cool inside look into how the Azure datacenter works and what happens when deploying cloud services and virtual machines.

Azure is currently growing at such a rate that Microsoft is building 6 new datacenters around the world that will be completed sometime this year.

Most of this session was very technical, but since we are geeks it’s always nice to know how things work 🙂

Surviving public speaking

This was a lunch session by Alex de Jong, also from the Netherlands. It was a very good presentation that focused on giving technical presentations at events like TechEd.

It gave advice on how to prepare your talk. Keep your audience in mind when designing it and make sure that you are enthusiastic about your subject. When giving the talk, make sure you feel at ease. Wear clothes that make you feel comfortable.

He also talked about giving demos: prepare your demo machines with Hyper-V and use snapshots so you can always return to a known-good state.

I agree with what Alex said about PowerPoint. Your presentation should not consist of one slide after the other. Fewer slides and more talking leads to a better presentation.

On a technical note, don’t use animations and all kinds of slide transitions too much if you don’t want to make your audience sick.

This was a session I was really looking forward to. For the last couple of months I’ve been doing a lot of JavaScript development, and I’ve come to the conclusion that JavaScript can be used for large-scale development.

And the session gave me just what I was looking for! It started with some history of the language and some numbers. Did you know, for example, that Gmail in 2010 contained 443,000 lines of JavaScript code, 978,000 including comments? That really shows that JavaScript is a powerful language.

After a quick overview of the language (consisting of only two slides) it became more of a deep dive.

The three most important concepts of the language were discussed, namely:

  • Objects
  • Prototypes
  • Functions

There were some nice demos, all written in the IE10 developer tools. For example, the property descriptors in ECMAScript 5, or using a prototype for sharing default values across an inheritance chain.

I will start experimenting with the things I learned and post them in a blog soon! 

Compile & Execute Requirements in .NET

This talk was given by David Starr. I had already heard a lot about him, so I was looking forward to seeing him give a talk. It was a great talk. It discussed how we can transform user requirements into compilable code that can form the basis of our test harness.

Of course most of this is still a glimpse of the future, since these tools are still in development. Things we can at least experiment with today are tools like SpecFlow or SubSpec. I would encourage you to have a look at them to see where this is going.

Ask the experts

The day ended with Ask The Experts. I didn’t know what to expect, but as it turned out Microsoft had prepared a huge room with tables divided by subject. You could take a seat at a table to chat with experts in that field of expertise and ask them all kinds of questions. I went to the tables about Visual Studio and ALM. There were some nice discussions about how to incrementally move your company to Scrum and a continuous deployment environment.


TechEd is definitely the place to network!