Modifying a VSIX with Node

In a previous post I showed how to use PowerShell to edit a VSIX file. Because the build and release system of Visual Studio Team Services is cross-platform, I ported the code to Node. This allows you to run the script on any platform where the Node-based agent runs, such as Linux and Mac.

To create and test the script, I first installed the Node.js Tools for Visual Studio. These tools let you create a Node.js application such as a console app or a web app.


To create the VSIX editing script, I created a Console Application project. With Visual Studio you can then run and debug the script. The finished project has three important files.

package.json

Package.json is the configuration file for the Node.js package manager, npm. In package.json you list the dependencies that your application requires. You can also list scripts that then become available in the Task Runner Explorer (if you install the NPM Task Runner extension). The package.json file for this application looks like this:

{
  "devDependencies": {
    "adm-zip": "^0.4.7",
    "archiver": "^0.21.0",
    "copyfiles": "^0.2.1",
    "mkdirp": "^0.5.1",
    "q": "^1.4.1",
    "rimraf": "^2.5.1",
    "temp": "^0.8.3",
    "tfx-cli": "^0.3.12",
    "tsconfig-glob": "^0.4.0",
    "typescript": "^1.7.5",
    "typings": "^0.6.6",
    "vsts-task-lib": "^0.5.10",
    "x2js": "^2.0.0",
    "xmldom": "^0.1.22"
  },
  "scripts": {
    "initdev:npm": "npm install",
    "initdev:typings": "typings install",
    "initdev": "npm run initdev:npm && npm run initdev:typings",
    "compile": "tsc --project .\\"
  },
  "name": "extensiondependencies",
  "private": true,
  "version": "0.0.0"
}

First, there is a list of development dependencies. All these dependencies are downloaded by NPM and put in the node_modules subfolder in your project. The scripts section lists a couple of scripts to initialize NPM and Typings and to compile the TypeScript files. These scripts can be run through the Task Runner Explorer or from the command line.


typings.json

The typings.json file describes the TypeScript definition files that you use in your project:

{
  "dependencies": { },
  "devDependencies": { },
  "ambientDevDependencies": {
    "adm-zip": "github:DefinitelyTyped/DefinitelyTyped/adm-zip/adm-zip.d.ts",
    "jasmine": "github:DefinitelyTyped/DefinitelyTyped/jasmine/jasmine.d.ts",
    "node": "github:DefinitelyTyped/DefinitelyTyped/node/node.d.ts#1c56e368e17bb28ca57577250624ca5bd561aa81",
    "Q": "github:DefinitelyTyped/DefinitelyTyped/q/Q.d.ts#aae1368c8ee377f6e9c59c2d6faf1acb3ece7e05",
    "vsts-task-lib": "github:Microsoft/vso-agent-tasks/definitions/vsts-task-lib.d.ts#releases/m94",
    "temp": "github:DefinitelyTyped/DefinitelyTyped/temp/temp.d.ts"
  }
}

As you can see, a definition has a name and a location where the definition file can be found. A lot of definition files live in the DefinitelyTyped repository on GitHub. The definition file for vsts-task-lib is located in the Microsoft repository on GitHub. Running typings install from a command line downloads all the definition files and stores them in a typings folder in your project. A main.d.ts file is created that references all your definition files, so you only have to reference one single file from your code.

app.ts

The actual script is contained in the app.ts file. This is the startup file for your Node console application. Since it's a TypeScript file, I created a class named VSIXEditor that takes care of the actual work:

/// <reference path="typings/main.d.ts" />
import AdmZip = require("adm-zip");
import temp = require("temp");
import fs = require("fs");
import path = require("path");
import Q = require("q");

class VSIXEditor {
    private zip: AdmZip;
    private outputPath: string;
    private edit: boolean = false;

    private versionNumber: string = null;
    private id: string = null;
    private publisher: string = null;

    constructor(input: string, output: string) {
        this.outputPath = output;
        this.zip = new AdmZip(input);
    }

    public startEdit() {
        if (this.edit) throw "Edit is already started";
        this.edit = true;
    }

    public endEdit() {
        this.validateEditMode();

        if (this.hasEdits()) {
            // Track temp directories so they are cleaned up automatically.
            temp.track();

            temp.mkdir("visxeditor", (err, dirPath) => {
                if (err) throw err;

                // Extract the VSIX (a ZIP file) to a temporary folder.
                this.zip.extractAllTo(dirPath, true);

                this.EditVsixManifest(dirPath)
                    .then(() => {
                        this.EditVsoManifest(dirPath).then(() => {
                            // Repackage the edited folder into a new VSIX.
                            var archiver = require("archiver");
                            var output = fs.createWriteStream(this.outputPath);
                            var archive = archiver("zip");

                            output.on("close", function () {
                                console.log(archive.pointer() + " total bytes");
                                console.log("archiver has been finalized and the output file descriptor has closed.");
                            });

                            archive.on("error", function (err) {
                                throw err;
                            });

                            archive.pipe(output);

                            archive.bulk([
                                { expand: true, cwd: dirPath, src: ["**/*"] }
                            ]);
                            archive.finalize();
                        });
                    });
            });
        }
    }

    private EditVsoManifest(dirPath: string) {
        var deferred = Q.defer<boolean>();

        // The vsomanifest is currently rewritten unchanged; add edits here if needed.
        var vsoManifestPath = path.join(dirPath, "extension.vsomanifest");
        fs.readFile(vsoManifestPath, "utf8", (err, vsoManifestData) => {
            if (err) throw err;
            fs.writeFile(vsoManifestPath, vsoManifestData, () => {
                deferred.resolve(true);
            });
        });
        return deferred.promise;
    }

    private EditVsixManifest(dirPath: string) {
        var deferred = Q.defer<boolean>();
        var x2jsLib = require("x2js");
        var x2js = new x2jsLib();

        var vsixManifestPath = path.join(dirPath, "extension.vsixmanifest");
        fs.readFile(vsixManifestPath, "utf8", (err, vsixManifestData) => {
            if (err) throw err;

            // Parse the manifest XML and update the Identity attributes.
            var vsixmanifest = x2js.xml2js(vsixManifestData);
            var identity = vsixmanifest.PackageManifest.Metadata.Identity;
            if (this.versionNumber) identity._Version = this.versionNumber;
            if (this.id) identity._Id = this.id;
            if (this.publisher) identity._Publisher = this.publisher;

            vsixManifestData = x2js.js2xml(vsixmanifest);

            fs.writeFile(vsixManifestPath, vsixManifestData, () => {
                deferred.resolve(true);
            });
        });

        return deferred.promise;
    }

    private hasEdits(): boolean {
        return this.versionNumber != null || this.id != null || this.publisher != null;
    }

    public EditVersion(version: string) {
        this.validateEditMode();
        this.versionNumber = version;
    }

    public EditId(id: string) {
        this.validateEditMode();
        this.id = id;
    }

    public EditPublisher(publisher: string) {
        this.validateEditMode();
        this.publisher = publisher;
    }

    private validateEditMode() {
        if (!this.edit) throw "Editing is not started";
    }
}

And this is how you would use the code:

var vsixEditor = new VSIXEditor("C:/temp/myvsix.vsix",
    "C:/temp/myvsixoutput.vsix");
vsixEditor.startEdit();
vsixEditor.EditVersion("1.0.0");
vsixEditor.EditId("xxIDxx");
vsixEditor.EditPublisher("xxPublisherxx");
vsixEditor.endEdit();
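
To run this from a build or release step instead of hard-coding the paths, you could read the values from the command line. The following sketch is my own addition, not part of the original script; it assumes it sits at the bottom of app.ts (so VSIXEditor is in scope), and the argument order is an arbitrary choice.

// Hypothetical command-line wrapper; the argument order is an assumption.
// Usage: node app.js <input.vsix> <output.vsix> <version> [id] [publisher]
var args = process.argv.slice(2);
if (args.length < 3) {
    console.error("Usage: node app.js <input.vsix> <output.vsix> <version> [id] [publisher]");
    process.exit(1);
}

var editor = new VSIXEditor(args[0], args[1]);
editor.startEdit();
editor.EditVersion(args[2]);
if (args[3]) editor.EditId(args[3]);
if (args[4]) editor.EditPublisher(args[4]);
editor.endEdit();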

Modifying a VSIX file before publishing

A software delivery pipeline starts with a build. The build creates the artifacts you need. You then use these artifacts to deploy to different environments, changing only the configuration data that's specific to each environment.

When it comes to Visual Studio Team Services extensions, the artifact that your build creates is a VSIX package. A VSIX is a ZIP file that contains metadata about your extension plus the actual files the extension requires. There is no official way to edit a VSIX file after it's been created, but when deploying a VSIX to different environments I want to be able to change things like the publisher, ID and version number. This allows me to have a single package and change only the configuration data on deployment.
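
Since a VSIX is just a ZIP archive, you can inspect its contents with any ZIP library. As a quick illustration (a sketch using the adm-zip package from the Node post above, not part of the deployment script), listing the entries shows the manifest files that the script below edits:

import AdmZip = require("adm-zip");

// List the entries of a VSIX to see which files it contains.
var vsix = new AdmZip("C:/temp/myvsix.vsix");
vsix.getEntries().forEach(entry => {
    // Typically includes extension.vsixmanifest and extension.vsomanifest
    console.log(entry.entryName);
});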

Manipulating a VSIX with PowerShell

Fortunately, the .NET Framework has support for manipulating the files inside a ZIP archive. The following script takes a VSIX, opens it up, updates values and then publishes it to VSTS.

[cmdletbinding()]
param(
 [string] [Parameter(Mandatory=$true)] $PathToVSIX,
 [string] [Parameter(Mandatory=$true)] $Token,
 [string] $IsPublicInput = "false",
 [string] $Version = $null,
 [string] $Publisher = $null,
 [string] $RemoveBaseUriInput = "true",
 [string] $ShareWith = $null
)
Set-StrictMode -Version 3

[bool]$IsPublic = [bool]::Parse($IsPublicInput)
[bool]$RemoveBaseUri = [bool]::Parse($RemoveBaseUriInput)

$file = Get-ChildItem $PathToVSIX -Filter *.vsix -Recurse | % { $_.FullName } | Select -First 1
Write-Verbose "Found VSIX Package $file"

try { $null = [IO.Compression.ZipFile] }
catch { [System.Reflection.Assembly]::LoadWithPartialName('System.IO.Compression.FileSystem') }

try { $fileZip = [System.IO.Compression.ZipFile]::Open( $file, 'Update' ) }
catch { throw "Another process has locked the '$file' file." }

$desiredFile = [System.IO.StreamReader]($fileZip.Entries | Where-Object { $_.FullName -match 'extension.vsixmanifest' }).Open()
$text = $desiredFile.ReadToEnd()
[xml]$xml = $text
$desiredFile.Close()
$desiredFile.Dispose()

if ($Version)
{
 Write-Verbose "Updating Version to $Version"
 $xml.PackageManifest.MetaData.Identity.Version = $Version
}

if ($Publisher)
{
 Write-Verbose "Updating Publisher to $Publisher"
 $xml.PackageManifest.MetaData.Identity.Publisher = $Publisher 
}

if($IsPublic -eq $true)
{
 Write-Verbose "Setting GalleryFlag to Public"
 $xml.PackageManifest.MetaData.GalleryFlags = "Public"
}
else
{
 Write-Verbose "Setting GalleryFlag to Private"
 $xml.PackageManifest.MetaData.GalleryFlags = ""
}

$desiredFile = [System.IO.StreamWriter]($fileZip.Entries | Where-Object { $_.FullName -match 'extension.vsixmanifest' }).Open()

$desiredFile.BaseStream.SetLength(0)
$desiredFile.Write($xml.InnerXml)
$desiredFile.Flush()
$desiredFile.Close()

$desiredFile = [System.IO.StreamReader]($fileZip.Entries | Where-Object { $_.FullName -match 'extension.vsomanifest' }).Open()
$text = $desiredFile.ReadToEnd()
$desiredFile.Close()
$desiredFile.Dispose()

if ($RemoveBaseUri -eq $true)
{
 $text = (($text -split "`n") | ? {$_ -notmatch 'baseUri'}) -join "`n"
}

$desiredFile = [System.IO.StreamWriter]($fileZip.Entries | Where-Object { $_.FullName -match 'extension.vsomanifest' }).Open()

$desiredFile.BaseStream.SetLength(0)
$desiredFile.Write($text)
$desiredFile.Flush()
$desiredFile.Close()

$fileZip.Dispose()

$shareArgs = @()
if ($ShareWith)
{
 Write-Verbose "Sharing extension with $ShareWith"
 $shareArgs = @("--share-with", $ShareWith)
}

npm install -g tfx-cli
tfx extension publish --vsix "$file" --token $Token @shareArgs

You can use this PowerShell script and execute it as a step in your release definition to configure and publish your extension.

New Visual Studio Team Services Dashboard Widget

Microsoft released new capabilities for building your own dashboard widgets. The Widget SDK is now in public preview. One of the advantages of being an ALM Ranger is that we get early access to these new features to help the team test them and deliver guidance.

Together with Mathias Olausen I've been working on a widget that shows a countdown on your dashboard. You can configure things like the foreground and background color, the title and of course the date that you want to count down to. We also created a second widget that automatically retrieves the end of your current iteration and uses that for the countdown.

On the official ALM Rangers blog we've published an article that describes some key points of the widget and how we've built it:

  1. Getting started with Widgets
  2. Developing the Widget
  3. Issues and resolving them
  4. Publishing Widgets

So please go and install our new widget and let us know what you think!

DevOps on the Microsoft Stack – Pre-order now

The last couple of months I've been busy working on my new book: DevOps on the Microsoft Stack.

DevOps on the Microsoft Stack Book Cover 3D

DevOps is a popular subject and Microsoft has a very good tool suite in the form of Visual Studio, Visual Studio Team Services, Team Foundation Server and Microsoft Azure. This book takes you on a tour through these tools.

You’ll learn about a host of features like:

  • Agile Project Management
  • Version Control with TFVC and Git
  • Technical Debt Management
  • Package Management
  • Continuous Integration and Continuous Delivery
  • Testing and test automation
  • Monitoring

Pre-order now

Writing is almost finished and you can pre-order the book on Amazon or directly from Apress. If you order a copy, please let me know what you’re looking forward to! You can reach me through the comments on this blog or by sending me a tweet (@wouterdekort).

My first Visual Studio Team Services Extension is live!

One of the many privileges of being an ALM Ranger is that you get the chance to participate in private preview programs where you work closely with the product group.

The latest opportunity was around the new Visual Studio Online Extension model. The new Extension model allows you to create your own extensions that plug directly into the VS Team Services web access.

I started out with a simple extension: Folder Management. This extension allows you to create new folders directly from Web Access in both TFVC and Git repositories.

And I'm very pleased to announce that my new extension is now live! The code is on GitHub and will be completely open source from now on. There is also a blog post on the ALM Rangers account with some more details.

If you want to install the extension on your account, please join the Visual Studio Industry Partner Program. This is completely free at the basic level and will give you access to the new Extension model.

Questions? Feedback? Please leave a comment!

Slides from my TechDays 2015 sessions

Last week we had the TechDays here in the Netherlands. I was asked to deliver two sessions and two Ask me Anything sessions. A lot of attendees asked me if they could find my slides somewhere.

For those of you, here they are:

The two Ask me Anything sessions were recorded and will hopefully be available on Channel 9 later this week (I'll post the links when they're online). I love the concept of the AMA sessions and I hope to see them again next year.

If you have any questions on my sessions or would love to have them delivered at your place sometime (in person or online) just leave a comment!

Azure Dev/Test labs preview is getting started

Today I woke up to an email inviting me to the new Azure Dev/Test Lab preview program. The preview program can start any moment now!

I’m really excited about this new feature. Especially when it comes to Application Lifecycle Management and Azure, Dev/Test is something a lot of my customers are interested in.

While working in the preview program, you get to interact directly with the product group and the team that's building the new features. This means you learn from the people who really know it and you have a chance to influence the direction of the product!

If you want to know more about Dev/Test labs, check out my previous blog post or Claude Remillard's talk at Build (at 18:30). And of course you can also apply for the preview program.

Questions? Anything you would love to see in the Dev/Test labs? Please leave a comment!

Have you seen Build vNext?

Build is an important part of Application Lifecycle Management. Every DevOps pipeline starts with a build. Visual Studio Online and Team Foundation Server have used the XAML-based build system for some time now. I've used this system in a lot of scenarios, ranging from small applications to complex scenarios with a multitude of customized builds working together.

But now that I’ve seen the new build system and had some time to play with it I’m completely sold. In this post I want to give you a quick intro to Build vNext and show you what it’s all about.

Why a new Build system?

The old build system is based on Windows Workflow Foundation. Microsoft created a set of components that you can use to create build templates. You can also create your own components (such as the Code Metrics I blogged about) and add these to your templates. A template could become very long. The following image shows you the outline of the default template that ships with TFS 2013.

An outline for the Default XAML build template

These Windows Workflow Foundation builds are run by what's called a Build Controller and a Build Agent. The Agent does the actual work; the Controller manages one or more Agents and distributes the work to them. The Controller and Agent can only run on Windows. This means that cross-platform builds, for example building a Xamarin app on Linux or Mac, are not possible out of the box.

Another limitation is that a Controller is linked to one, and only one, Team Project Collection. This means that sharing a build infrastructure between different collections is impossible. This required careful planning and sometimes resulted in customers having multiple build controllers without any real need for it.

Builds are configured by a build definition that uses a specific build template. These build definitions contain all the settings for your build such as which code to build, which tests to run and what do to with the results. The build definitions are not stored in version control. This means there is no audit trail and no way to go back to a previous version.

That's enough info on the old build system. Let's look at how the new build system solves all those issues.

Introducing Build vNext

While I was working on this post, Build vNext became available in public preview on Visual Studio Online. If you go to your VSO account (and if you don't have one, please create one! It's free) and select a project, you will see a new tab in your menu: Build Preview. This will also be the default build system that ships with Team Foundation Server 2015. Of course the XAML builds will still be there for backwards compatibility, but whenever possible, the new build system is recommended.

Configuring everything that has to do with your build definitions is now done from Web Access. This means you don’t have to use Visual Studio anymore for configuring builds, immediately making the build system configuration cross platform. Everyone with a browser can create a build.

Builds are no longer based on complex build templates. Instead, a build is simply a list of tasks. Tasks can be anything from running a Visual Studio build, executing a set of tests, calling a PowerShell script or even a custom task you create yourself. In the old build system, creating build tasks required a fair amount of knowledge (if you look at tfsbuildextension.codeplex.com you find some extensions created by the community). Fortunately, the new build system is very extensible and much easier to extend than the old one.

A list of tasks that form a new build

A build task is essentially a JSON file shipped with the other files it needs (like a PowerShell script). The JSON file describes your task with things like a name and description, defines the parameters that are required to configure the task, and finally points to the actual logic that runs when the task is executed. And best of all, there are plenty of examples already! With Microsoft's new open-source focus, it shouldn't surprise you that the build tasks are developed in the open on GitHub: https://github.com/Microsoft/vso-agent-tasks.
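
To give an idea of what such a JSON file contains, here is a rough sketch of its shape expressed as a TypeScript interface. The exact schema isn't covered in this post, so this is based on the tasks in the vso-agent-tasks repository; treat the field list as illustrative rather than exhaustive.

// Illustrative sketch of the task.json shape; not the official schema.
interface TaskDefinition {
    id: string;                 // GUID identifying the task
    name: string;               // short name used in logs
    friendlyName: string;       // name shown in the build editor
    description: string;
    category: string;           // e.g. "Build", "Utility", "Test"
    author: string;
    version: { Major: number; Minor: number; Patch: number };
    inputs: {                   // the parameters the editor renders for the task
        name: string;
        type: string;           // e.g. "string", "boolean", "filePath"
        label: string;
        defaultValue?: string;
        required?: boolean;
        helpMarkDown?: string;
    }[];
    instanceNameFormat: string; // how the step is labeled in a build definition
    execution: {                // what actually runs, per handler (Node, PowerShell, ...)
        [handler: string]: { target: string; argumentFormat?: string };
    };
}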

So, a build is a list of tasks. In addition to the tasks and their configuration, you can set numerous other properties on a build definition, such as variables that you want to share between tasks, retention policies (how long finished build results are kept around) and the triggers that should start a build. If you look at the next figure, you will immediately notice a difference from the old build definition: there are checkboxes instead of radio buttons. This means you can now have a single definition with multiple triggers. Previously, you would have to clone your build definition and configure a different trigger each time to achieve this.

Selecting multiple triggers for a build

Build definitions now also keep track of their history. Any change you make will be saved as a new version. You can view the history of a build definition and compare different versions with each other to quickly figure out what has changed.

A diff showing the changes made to a build definition

There are also changes in the way a build runs. Web Access will give you a live view of the output a build produces. This output is grouped in steps that you can easily navigate and is stored with the build result.

And of course there is something that has to run a build. This is done by the new build agent: a cross-platform, Node.js based runner that can be installed with a simple script on Linux, Mac and Windows. This way, Microsoft has taken the build system completely cross-platform, allowing you to have a farm of different build machines that can all be used together.

How does the build system figure out which agent to use? This is done by setting Capabilities on the agent and Demands on the build. Capabilities are automatically discovered by an agent. So installing Visual Studio on a build machine will let the agent know that it can run Visual Studio Builds. This removes the need to manually tag build agents and then add those tags to your build definitions to make sure that a build ends up on the correct machine (if required you can add custom capabilities and demands). There is no concept of a Controller linked to a Team Project Collection anymore. So agents can finally be shared across collections! Does this mean there is no way to restrict agents or have some form of prioritization?

Fortunately, there is. Microsoft introduced the new concept of Pools and Queues. A Pool is defined at the level of your VSO account or TFS Application Tier. Pools can be shared across Team Project Collections and are used to define permission boundaries. Queues are scoped to the Team Project level and belong to a Pool. This way, a build can be assigned to a queue and then run on an agent.

Configuring Queues and Pools

How can I start?!

Since Build vNext is in public preview on Visual Studio Online, the easiest way to get started is by using your VSO account. Here are some resources to get you started:

And that’s it for now! I encourage you to start using the new build system and if you have any questions or comments, you know where to find me!

Things to do after Build

Do you know that phony feeling you can get as a developer? The feeling that you actually don't know anything and that you're only pretending? Scott Hanselman wrote a post on this, 'Exploring imposter syndrome in technology', and apparently the feeling is not uncommon.

And I suffer from it. Especially after visiting Build! There are so many great things discussed and new features released that I left Build with the feeling that I should really study up on some stuff.

So without further ado!

My list of things to do after Build

And finally: get a HoloLens! But that one is going to have to wait a bit. I think after doing all those things I'll be a little bit less of a phony.

What is on your todo list? Anything I’ve missed or that you would love to learn? Please leave a comment!

Have you seen the new dev/test labs on Azure?

One of the great scenarios for Azure is dev/test. Having the ability to quickly set up an environment in Azure and run your automated or manual tests on it gives you extreme flexibility. You pay per minute and you can have as many machines as you want in a couple of minutes.

But there are some challenges.

One thing I notice while working with customers is that they often want to limit how much money can be spent on Azure. Customers often ask for a cost calculation so they know beforehand how much Azure is going to cost them. Of course, the whole idea of Azure is that you pay for what you use, so giving an exact amount up front is hard.

Another thing that teams sometimes struggle with is the amount of choice on Azure. You have different sizes of virtual machines, ranging from simple A-series to Godzilla machines. Add to that the number of virtual machine images you can use, network settings and security configurations, and a team can suddenly be overwhelmed with all the options they have.

Dev/Test labs to the rescue!

During Build this week Claude Remillard announced a new Azure service: Dev/Test labs. You can now create one or multiple labs in your subscriptions. These labs can have a quota on how much can be spent. This means that a team can never go over the amount you configure, giving managers and others a nice and secure feeling.

But that's not all. You can also configure which virtual machine sizes can be used, which templates are accessible, and you can easily create custom base images for your team. The developer division at Microsoft uses this themselves. They have base images that contain things like the correct Windows version, Office and other standard tooling. Each night they run a scheduled build that takes a base image, adds the newest version of Visual Studio to it and creates a new image ready to go the next day. You can also configure your lab to automatically shut down VMs at the end of the day to save on costs.

Now integrate this with the new Release Management service (a topic for another blog!) and you have a great new tool to automatically set up and tear down environments on Azure.

Want to know more?

On Channel 9 you can find the recording of the session with demos showing the new lab features: http://channel9.msdn.com/Events/Build/2015/3-721. Currently this feature is in private preview but you can request to join: http://azure.microsoft.com/en-us/campaigns/devtest-lab/

I’ve requested to join the private preview. As soon as I’m up and running I’ll post more details on this promising new feature.

What do you think? Is this useful? Anything you like to see? Please leave a comment!