Alex Speaking User Group - Community Event

During a Community Engagement in Oradea

Given my less-frequent posts this year, you've probably realized by now that it has been quite a busy ride for me and the team. We're just about to launch two new products (stay tuned!), we've signed a few sweet deals and we've kept the great flow going – I'll keep you updated as soon as I can disclose more; right now, everything is very hush-hush. And yet, community-wise, the upcoming months are shaping up to be the busiest I've ever had!

Here are some of the public conferences, meetings and user groups which I will attend as a speaker. If you happen to attend any of these, make sure you drop by and say 'Hi!'. I'd love to chat with you about anything related to Azure, Microsoft, Xamarin, SQL or whatever your favorite topic might be.

Community Engagement Quasi-Calendar

  • March 25th, DevExperience Iasi – I'll talk about 'High Scalability and Security for Web, Mobile and API Apps' running in Azure and was also invited to be part of a discussion panel about Microservices – guess which Fabric I'll get to talk about (hint: it starts with Service…)
  • April 16th, Global Azure Bootcamp Oradea – yet again, Oradea is part of this fabulous global community event and this time it’s going to be bigger and better than ever before. I’m hosting the event in a new location with fellow MVPs who’ll attend as speakers and given this year’s topic, it will be a blast! Stay tuned, if you happen to be in Oradea on April 16th! (link coming up…)
  • April 23rd, CodeCamp Iasi – the gang in Iasi is doing it again: 1200+ developers will most likely show up to one of the largest community-driven events for software developers in Romania. My session at CodeCamp Iasi will be about Cloud Patterns and Best-Practices. And if you think I could talk the entire day about this, think again: I could keep on going for months, honestly. Instead, I only have 50 minutes available to cover the best of cloud architecture and thus this session promises to be a lot of fun!
  • April 26th-29th, WinDays – hosted by Microsoft Croatia in the beautiful country of… you guessed it: Croatia! I will deliver not one but two sessions at WinDays this year, both related to cloud topics: 'SQL Database from a Developer's Perspective' and my favourite, 'Know Your Customers, Know Your Apps!'
  • May 7th, CodeCamp Cluj-Napoca – basically more or less CodeCamp Iasi, but on the other side of the country, for an entirely different group of talented software developers
  • May 16th-18th, Microsoft NT Konferenca, in the wonderful country of Slovenia. Again, this is a conference I'm very excited about, especially since my last trip to Slovenia left me utterly surprised – Slovenia is a great country and exceeded my expectations in absolutely every aspect you could imagine. Here are some (huge) numbers about Microsoft NT Konferenca: 3 days long, 1700+ participants, 10+ one-day trainings, 150+ speakers and a tradition of 21 (yes, 21!) years. You could almost think of this as a mini-Build, European-style. Add the fact that the event is hosted at the seaside in a luxurious hotel which also offers a pool with warmed salt water – did you start searching for a plane ticket already? At Microsoft NT Konferenca, I'll talk about 'Using Azure for Dev-Test Environments' and about 'Knowing Your Customers, Knowing Your Apps'.
  • I’ve saved the best for last: May 26th-27th, ITCamp Cluj-Napoca. This is the kind of conference which doesn’t need an introduction any more. This year however, given the list of speakers and their background, you’ll get the chance to learn a lot (and I mean A LOT) about security. The agenda is still being put together, so I don’t want to spoil any surprises the organizers are preparing…

So, there's a lot of community stuff going on across Europe. And as I happen to be part of some of these events, either as an organizer or as a speaker, it's my pleasure to declare the conferencing season OPEN!

See you there 😉

This post describes the latest Team Build updates, with features available both in Team Foundation Server (TFS) 2015 Update 2 RC1 and Visual Studio Team Services (VSTS), and has also been posted on the Azure Development Community blog at https://blogs.msdn.microsoft.com/azuredev/.

build

‘Team Build is Dead! Long Live Team Build!’

This was one of the main titles of last year's Ignite conference, when the latest version of Team Build was introduced, and there's a simple reason behind it – the new Team Build system is a complete rewrite of the former Team Build. One of the first results of this rewrite is that there's no longer any reason to shrug when questions such as "I love TFS, but why can't I use it to build my Android projects?" are asked. As it turns out, the latest build of Team Build allows for more extensibility than ever, easier management through the web portal and much easier build agent deployment – throughout this post I will try to cover as much as possible in terms of the newly available features.

What’s new?

Ever opened a XAML Build Definition Before? Yikes!

Even though the entire workflow-based schema of a build definition prior to TFS 2015 was cool, as it allowed a lot of complexity in the logic of an automated build, it turned out that due to the lack of extensibility and the difficulty of understanding the underlying XML schema, build definitions needed another approach. This is probably one of the main reasons behind the decision to ditch XAML altogether from the new Team Build system. Don't get me wrong – XAML-based build definitions didn't go anywhere: you can still create them both in TFS and VSTS, but as the team has put it, they will become obsolete at some point in time and therefore it's best to put together a strategy for migrating from XAML build definitions to the new task-based Team Build system. And to be fair, the new system also comes along with tons of benefits, extensibility being one of the greatest ones (at least in my opinion).

Read More →

 HTTPS

HTTPS FTW

Just wanted to give you the heads-up that alexmang.com is now backed by a certificate signed by a public CA. This should make your experience on the website safer and should no longer raise any worries whenever you leave a comment, feedback and so on. Try it out now: https://alexmang.com. Oh, btw: if you're wondering, it works both on alexmang.com and www.alexmang.com.

Besides that, Ed Price from Microsoft also published my TechWiki Ninja Interview today on blogs.msdn.com (more specifically, here).

Best wishes for 2016!

Taking a look back at 2015, so many things happened. Thinking about the scientific advancements, I cannot NOT think of finding water on Mars, the power grid solution proposed by Elon Musk for solar-powered houses, the over-the-air software update which makes Teslas drive themselves on the highway, the Surface Book, the HoloLens and so many others. Wow! It's a great time to be an engineer!

Considering that some people's New Year's resolution last year was to read a book every week, and others' to read one every day, I think the best thing to wish you is enough strength to read all the great books in the world, the wisdom and knowledge to share by writing a book every day, and the passion to keep your interest growing.

Happy New Year!

MVP Announcement

Such a great start: MVP!

Also as a pleasant and extremely fulfilling surprise came my MVP award announcement today! So it’s official, starting today I am an Azure MVP.

For those of you not familiar with the acronym, MVP stands for Most Valuable Professional and is an annual award offered by Microsoft, ‘that recognizes exceptional technology community leaders worldwide who actively share their high quality, real world expertise with users and Microsoft‘ (recognition letter signed by Steven Guggenheimer).

There are roughly 4,000 awardees worldwide, which makes me extremely honored to be part of such a select group of people. Romania-wise, there are roughly 20 MVPs with varying specialties, such as Visual C++, Virtual Machine, ASP.NET, SQL Server, System Center, Windows Embedded, Windows Consumer and others, all part of the Microsoft ecosystem. Local community-wise, I happen to be the first MVP ever located in Oradea – cool! Along with three other Romanian MVPs, I now hold a title containing the biggest word at Microsoft right now, namely "Azure": Ioan-Alexandru Mang, Most Valuable Professional, Microsoft Azure.

MVPs also receive early access to technology through a variety of programs offered by Microsoft, which keeps them on the cutting edge of the software and hardware industry.

There are lots of people I have to thank for such great recognition, and I especially have to praise my family and friends for their ongoing support and contribution. Additionally, the title couldn't have happened without all the great audiences at my sessions throughout Romania and Europe, whether at ITCamp, Microsoft Summit, CloudBrew, CodeCamp, DevSum, Coding Serbia or all the cool user groups I've been invited to over time.

My engagements also got me many opportunities to meet great fellow technical experts, such as Mihai Tataran, Tudor Damian, Adrian Stoian, Radu Vunvulea, Gabriel Enea, Tiberiu Covaci, Andrei Ignat, Cosmin Tataru, Ionut Balan – to name a few – all Romanian MVPs themselves, but also very talented American, Belgian and Swedish experts, such as David Giard, Maarten Balliauw, Mike Martin, Kris van der Mast, Yves Goeleven, Tom Kerkhove, Kristof Rennen, Magnus Mårtensson, Chris Klug and Peter Örneholm – again, to name a few!

So, here’s to another great year of technical content, advancements and community engagements!

Cheers!

Speaking at Microsoft Summit & CodeCamp Iasi

Last week was one full of traveling experiences and speaking engagements at the two largest IT conferences in Romania: Microsoft Summit and CodeCamp Iasi. I got the chance to talk on the same subject at both conferences, namely Microsoft Azure Visual Studio Online Application Insights (this name is so Microsoft :-)) and, according to Andrei Ignat's (Microsoft Visual Studio and Development Technologies MVP) review here, I did a good job delivering the session.

Whilst this year's Microsoft Summit focused a lot on networking, with lots of great opportunities to meet and chat with brave entrepreneurs, successful business all-stars, experienced technical fellows and gizmo masterminds, CodeCamp was a hardcore developer event, with not two, not three, but ten (10!) simultaneous developer tracks. Why such a big number, you might ask? Well, considering that there were at least 1,800 attendees at the event, you can imagine why :-). Don't get me wrong, Microsoft Summit wasn't any smaller either, especially in terms of attendees. Rumor has it that over 2,100 attendees registered, but the exact number hasn't been made public yet.

However, the absolutely amazing thing about my Application Insights sessions this year at these two conferences was the fact that some developers who attended my session in Bucharest (at Microsoft Summit), decided to show up again (two days later) at the exact same session in Iasi (at CodeCamp), in order to get additional questions answered and take extra notes on the Application Insights usage scenarios.

This is not only overwhelming, but also extremely flattering! For those of you who attended any of my sessions: you were a great audience: THANK YOU!

For those of you who didn’t make it to either of these sessions, I’ve posted the slides further down this blog post. Be aware though that more than half of the session time was spent on demos and How-Tos rather than just slides – the recordings are yet to be announced by the Microsoft Summit organizers; as soon as they’re public, I’ll make them available on alexmang.com as well.

Also Speaking At CloudBrew Later This Month

CloudBrew AZUG

In addition, if you happen to be in Belgium by the end of the month (November 28th), make sure you register for CloudBrew – there, I'll focus my Application Insights session on IoT monitoring techniques and some other goodies.

Happy coding!

 

Public Events! Register Today!

Public Speaking @ Microsoft Summit 2015

As part of my continuous community involvement, for the next 30 days or so I will be busy traveling yet again across all of Romania (South, North-East and then West again) and Belgium. As you might already expect, I'm engaged in a few public events. If you want to drop by and say 'Hi', I'd be more than happy:

Unlike the events taking place in Romania – which are considerably large (1500+ participants) – CloudBrew is a very intimate event, with nothing but excellent talks, valuable networking opportunities, beer sampling (that's why it's called CloudBrew), excellent food and wonderful prizes. Both CodeCamp and CloudBrew are community-driven events (organized by CodeCamp and AZUG (the Azure User Group) Belgium respectively), but if you're especially interested in cloud computing, then CloudBrew is without a doubt the event for you!

At these events I'm yet again going to cover Azure-related content. This time however, I'll go deep into the service called Visual Studio Online Application Insights, show you tips and tricks on various patterns, show you how you could use Application Insights for any Internet of Things project and how to customize dashboards so they fit your DevOps team's requirements. Lastly, you'll also get a chance to see a complete IoT application running on a Raspberry Pi 2 powered by Windows 10 IoT, monitored using Application Insights – #CoolStuffAlert.

To get a sneak preview of what I’m putting together for these events, you still have a chance to watch the recording of my ITCamp session in May here:

ITCamp 2015 – Application Insights for Any App Must-Have Tool for Understanding Your Customers (Alex Mang) from ITCamp on Vimeo.

…or the recording David Giard and I did before my session there:

In addition to what you get from the roughly 50 minute video, please be advised that I’ve updated the presentation so that it’s up-to-date with the features which were added in the meantime – yes, Visual Studio Online is packed with lots of cool features for monitoring usage and application performance.

See you there!

sherlock

Thou shall not fail with unhandled exception!

As a software developer, whether you have one or a million applications deployed in production, getting the answer to ‘How’s it going, app?’ is priceless.

In terms of web applications, when it comes to diagnostics there are two types of telemetry data you can get and use in either forensic or maintenance operations. The first is the telemetry data generated by the hosting infrastructure itself for the running application – this is commonly called site diagnostic logs, as it is produced by the hosting infrastructure; site diagnostic logs have input from the operating system as well as from the web server, so that is Windows and IIS if you're still using the common hosting method. In terms of Azure Web Apps, these are generated on your behalf by the hosting infrastructure and can be accessed in a number of ways – but there's some configuration required first. The second type is the so-called application log, which is generated by the application itself as a result of explicit logging statements in code, such as Debug.WriteLine() or Trace.TraceError().
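Just to make the distinction concrete, here's a minimal sketch of what such explicit application logging might look like in an MVC controller action – the controller name and the messages are made up for illustration, but these are exactly the kind of Trace.* and Debug.* calls that end up in the application log:

using System.Diagnostics;
using System.Web.Mvc;

public class InvoicesController : Controller
{
    public ActionResult Details(int id)
    {
        // Application log entry at the Information level
        Trace.TraceInformation("Loading invoice {0}", id);

        if (id <= 0)
        {
            // Application log entry at the Error level
            Trace.TraceError("Invalid invoice id: {0}", id);
            return HttpNotFound();
        }

        // Debug.* output is only emitted when the app was built with the DEBUG symbol
        Debug.WriteLine("Details action finished for invoice " + id);
        return View();
    }
}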

This general rule, however, doesn't fully explain why the Azure portal offers a larger number of settings for log files, or what these settings represent. For quite a long time now, both the generally available Azure portal (manage.windowsazure.com) and the preview portal (a.k.a. Ibiza – portal.azure.com) have had a configuration section for diagnostics. Within the portals, there are (at the time of this writing) four different settings with an On-Off toggle switch, meaning that you can choose whether or not each set of telemetry data gets collected. If you're wondering why this is the case, consider this: writing files over any storage technology, and especially over the Ethernet wire, takes time and eventually increases the IO load.

Storing logs

Within the Preview Azure Portal (a.k.a Ibiza) settings blade for Web Apps, the four settings for diagnostics are (picture below):

Diagnostic Logs

  1. Application Logging (Filesystem) – these logs represent the entries written explicitly by the application itself through Trace or Debug calls (Trace.* and Debug.* respectively). Of course, the methods available in the Debug class only work when the application has been compiled with a debug configuration. This setting also requires you to specify which logging level should be stored, and you can choose between Error, Warning, Information or Verbose. Each of these levels includes the logs contained within the previous level – I've attached a representative pyramid below. So for example, if you only want to export the error logs generated by your application, you set the level to Error and you will only get those logs – but if you configure the level to Warning, you'll get both warning and error logs. Pay attention though: Verbose doesn't bypass the DEBUG compilation symbol – debug output lines will still only be stored if the application has been built with the DEBUG symbol.
error levels
  2. Web server logging – once configured, the environment will store the IIS logs generated by the web server on which the web application runs. These are very useful especially when you're trying to debug crashes or poor performance, as they contain information such as the HTTP headers sent by the client (the requester), the client's IP address and other useful data. Another priceless piece of information, especially when you don't know why your application runs slow, is the request time, which specifies how long it took the web server to process a particular request. Properly visualized, these can dramatically change the decisions you take in terms of optimization.
  3. Detailed error messages – here's where things get a lot more interesting, as detailed error messages are HTML files generated by the web server directly for all the requests which turned out to result in an error, based on the HTTP status code. In other words, if a particular request results in an HTTP status code in the 4xx or 5xx range, the environment will store an HTML file containing both the request information (with lots of details) and possible solutions.
  4. Failed request tracing – with failed request tracing, the environment will create XML files which contain a deeper level of information for failed requests. In terms of IIS, you might already know that each request goes through a number of HTTP modules that you either install via the GAC or specify in the system.web node of the web.config file. In terms of ASP.NET 5, things change a lot, as modules can be added programmatically in code and you can self-host the entire environment. Anyway, the generated XMLs will contain information about each HTTP module that was invoked whilst processing the request, along with details such as how long it took each module to process the request, messages out of the traces written by that module and much more.

As cool as it is to get so much data out of Azure Web Apps simply for forensic purposes, there are at least two huge drawbacks which come by default:

  1. All logs are (currently) saved by default… locally. This basically means that whenever the Fabric decides to swap your app to a different hosting environment, you will lose all your diagnostic data – as will also happen if, for whatever reason, the machine reboots. In addition, remember the stateless emphasis I (and everyone else) have insisted on during presentations, workshops and so on? Well, that's because in a clustered environment you never get the promise that each and every request will go to the same actual target. Therefore, you might find that clients continuously requesting your apps generate logs on multiple machines, which makes forensic operations difficult
  2. The previous point can however be solved by exporting the log data to Azure Storage. The bad news though is that as extensive as the Web App blade (and everything related to Web Apps) is, it lacks the option of configuring the Azure Storage account the logs should be exported to – therefore, you have to swap between the old (still generally available) portal – https://manage.windowsazure.com – and the new portal – https://portal.azure.com. This will most likely be solved by the Web App team in Redmond in the near future. Just as a side note, that is EXACTLY what the word Filesystem means in the Application Logging toggle switch mentioned earlier. In order to make the change, simply open up the website in the management portal, go to the CONFIGURE tab and scroll down to the site diagnostics section. There, an extra configuration section allows you to explicitly configure application logs to go to the file system, Azure Storage Tables and/or Azure Storage Blobs and, even better, allows you to configure which log level should be stored in each of these containers. Remember that this is also the place where you can change the default 35 MB storage capacity limit, either up to 100 MB or as low as 25 MB. Also keep in mind that for Azure Storage, the limit is determined by the limitations of Azure Storage itself, so you can easily go past the 100 MB mark.

Reading logs

Using File Transfer Protocol (FTP)

Storing is just one part of the story – the real deal is consuming the data. Happily enough, accessing the log data is easy even from within the preview Azure portal – there's a set of two settings in the Essentials group which give you access to the file system via the File Transfer Protocol. As you can imagine, this is protected by a username and password pair. The host name and the username are shown in clear text and available right from within the Essentials group on the Web App's main blade. The password however, which matches the deployment password, is only available from the .PublishSettings file, which in turn can be downloaded by clicking the Get PublishSettings icon on the blade's toolbar.

Once you connect to the hosting environment via FTP, drill down into the File System until you reach the LogFiles folder (located in the root, actually) – this is the place where application and site diagnostics logs are stored.
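If you'd rather script this than use an FTP client, here's a rough C# sketch that lists the contents of the LogFiles folder over FTP – the host name, user name and password below are placeholders, so replace them with the values from the Essentials group and the .PublishSettings file:

using System;
using System.IO;
using System.Net;

class LogFilesLister
{
    static void Main()
    {
        // Placeholder FTP host and deployment credentials - take the real ones from the portal
        var request = (FtpWebRequest)WebRequest.Create(
            "ftp://waws-prod-xyz-001.ftp.azurewebsites.windows.net/LogFiles");
        request.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
        request.Credentials = new NetworkCredential(@"mywebapp\$mywebapp", "deployment-password");

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // Prints one line per file or folder found under LogFiles
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}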

Using Visual Studio

As a developer, Visual Studio is the most-used tool on my PC, and it's rarely used for DevOps or IT-Pro related tasks. Reading logs however, even if it might fall into the latter categories, can be done via Visual Studio too.

In either Visual Studio 2013 or Visual Studio 2015, there are two windows which relate to Azure, one being the legacy Server Explorer window and the other the Cloud Explorer window. Whilst Cloud Explorer is the new guy in town, it offers (in terms of accessing log files) the same functionality as Server Explorer, the mature sibling; that is, the ability to drill through the file system of a web app's hosting environment and show the Log Files folder, with all of its subfolders and files. These can also be read inside Visual Studio, so there's no Alt+Tab-ing between windows. Cool enough, VS also allows you to download the log files (one, multiple or all) locally, for further analysis, machine learning, Power BI – whatever.

Third party tools

Going into too much detail on the third-party tools which let you access a web app's settings, file system and so on is pointless – just be reminded that they exist and let's move on :-).

Azure Web Site Logs Browser

Here's yet again the place where things get interesting, as there's a web app extension which, once installed, allows you to do exactly ONE thing – view logs. The cool thing about it though is that it creates an HTTP endpoint within Kudu (that is, http://[appname].scm.azurewebsites.net/websitelogs), which you can open up in your favorite web browser; from there, you'll get exactly the same Log Files folder listing you've seen earlier. This makes things a lot easier, as there's no need to juggle too many tools when you're in a quick search for a specific log file.

Log Streaming

In this post, I've kept the sweets for last. Reading logs is an obvious task you have to do if you want to diagnose performance issues or failures; in my opinion however, it couldn't get any more passive than that. How do you deal with scenarios where you're being told that things go wrong but cannot reproduce them yourself? What if you could remotely see how your customers' requests are causing the system to fail, or the application to simply return unexpected error messages? Meet log streaming, a near-real-time streaming service provided by Azure.

The idea behind the streaming service is that, considering you have logs enabled, the system will start streaming log entries, which can be retrieved either via Visual Studio, PowerShell cmdlets or the Ibiza portal directly.
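As a rough sketch of how you might tap into the stream yourself (an assumption on my side, not the official tooling): Kudu – the same engine behind the .scm. endpoint mentioned earlier – exposes a log-streaming endpoint you can read with a plain HttpClient. The app name and deployment credentials below are placeholders:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class LogStreamReader
{
    static void Main()
    {
        StreamLogsAsync().Wait();
    }

    static async Task StreamLogsAsync()
    {
        // Placeholder deployment credentials, sent as HTTP Basic authentication
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes(@"mywebapp\$mywebapp:deployment-password"));

        using (var client = new HttpClient { Timeout = Timeout.InfiniteTimeSpan })
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // Kudu keeps the connection open and pushes new log lines as they are written
            using (var stream = await client.GetStreamAsync(
                "https://mywebapp.scm.azurewebsites.net/api/logstream"))
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = await reader.ReadLineAsync()) != null)
                {
                    Console.WriteLine(line);
                }
            }
        }
    }
}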

Conclusion

It's my opinion that the diagnostics services offered by Azure, especially for Web Apps, are incredibly thorough and mature enough for any production workload – it's just a matter of getting the configuration right without impacting performance, and afterwards making use of the data generated by the requests your application processes.

Happy coding!

-Alex

webhooks and asp.net

By now you've probably heard that ASP.NET now supports WebHooks – and not only does it support them, it gets along with them quite well.

Disclaimer: If you've read my posts before, you probably know by now that I'm not the kind of guy to trumpet things which were already decently promoted by team members, company blogs and other community leaders. More specifically, the announcement for WebHooks support was already made by Scott Guthrie, Scott Hanselman and others. If you missed any of these, please go ahead and check them out first.

The announcement regarding ASP.NET WebHooks support has been well covered for the last month or so. So rather than go through the announcement again, I wanted to detail the process of sending WebHooks. Before you read any further, please make sure you read this blog post from Henrik F Nielsen on 'Sending WebHooks with ASP.NET' – the article is very thorough and well written, but it doesn't explain a few things you'll need if you're new to WebHooks.

Basics

If you’re familiar with WebHooks, skip to Receiving WebHooks. Otherwise, happy reading.

The whole concept of WebHooks isn't that new anyway, since it's only a standardization wanna-be for autonomous requests going back and forth between autonomous web servers by calling specific REST endpoints. When I say standardization wanna-be, I mean that the request which gets sent out to the target endpoint will have a specific request body format – a JSON object in a specific shape, as defined by the conventions of independent groups working on guidelines which will eventually evolve into standards. So the sender is going to specify a few things, such as the reason why it ended up sending the request in the first place (this is called an event). In other words, WebHooks is nothing more than a convention on what the request body of an automated HTTP request should look like in order for it to get along with other external web services. This link from pbworks.com explains it in more detail.

Taking a closer look at the various services which claim support for WebHooks, such as SendGrid, PayPal, MailChimp, GitHub, Salesforce and so on, you come to understand that whenever you, as a user, configure a particular service's WebHook implementation, you get to a part where you usually put in a URL and possibly select a list of events which may cause that URL to be hit by a new request. If you go over more services' WebHook configuration pages, you'll realize that this configuration step is fairly common to all of them and thus becomes a pattern.

Receiving WebHooks

Until recently, the difficult part was developing your service in such a manner that it understands WebHooks. That was simply because developing the next GitHub or PayPal overnight, so that users would eventually use it to get WebHook-generated requests sent to their own web services, was… well, let's face it – unrealistic. Therefore, most articles online cover the part about receiving WebHooks and never forget to praise the ASP.NET teams in Redmond for the terrific work they did – they totally deserve it.

Sending WebHooks

However, what if you DO develop the next PayPal? Or maybe simply a number of independent services which you want to sporadically communicate with each other, in an event-based manner?

Well, on one hand, considering that you want WebHooks to be sent out, you have to remember that a WebHook is, in the end, a fancy name for an HTTP request which contains a special request body format. Therefore, it's a no-brainer that you could instantiate an object of type HttpClient or WebClient and issue the request accordingly. But still, remember that if your services are going to be used by external customers, they'll eventually want these requests to go to their own services as well. In other words, your services should be able to issue requests, on an event basis, to a multitude of HTTP endpoints, based on a series of configurations: which actions trigger which events, and which URLs should be called for each.
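Just to make that concrete, here's what the hand-rolled HttpClient route might look like – the subscriber URL, event name and payload shape below are entirely made up, and note that all the 'which customer registered for which event at which URL' bookkeeping would still be on you:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

class ManualWebHookSender
{
    static void Main()
    {
        SendInvoiceGeneratedAsync().Wait();
    }

    static async Task SendInvoiceGeneratedAsync()
    {
        // Hypothetical event payload - in a real system the event name and data come from your own logic
        var payload = new
        {
            Event = "invoice_generated",
            Data = new { InvoiceId = 42, Total = 199.99m }
        };

        using (var client = new HttpClient())
        {
            var content = new StringContent(
                JsonConvert.SerializeObject(payload), Encoding.UTF8, "application/json");

            // Hypothetical subscriber URL - this would come from the customer's WebHook configuration
            var response = await client.PostAsync(
                "https://subscriber.example.com/webhooks/incoming", content);

            Console.WriteLine("Subscriber responded with {0}", response.StatusCode);
        }
    }
}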

More specifically, consider that you develop the next most popular online invoicing SaaS API. Since you're following the best practices for web services, you'll most likely not generate the invoice and e-mail it straight from the web server's code-behind, would you? Instead, you'd probably architect some sort of n-tier application where your front-facing web application receives invoice generation requests, responds with a 'promise' that the invoice will be generated and pushes the request to a queue of some kind, so that a worker role which actually generates the invoices can do its work in a nicely load-balanced environment.

The question is now, how could the external clients get notified that a new invoice has been generated and possibly even sent through an e-mail at the e-mail address specified in the request? Well, WebHooks could solve this problem quite nicely:

  1. the worker role would first generate the invoice
  2. once it is generated, considering this is an event of its own type (e.g. invoice_generated), it would raise this event and call a URL the customer has configured, only if the customer chose to receive requests for this event type
  3. next, the worker role would try to send the invoice as an attachment to the e-mail address the client specified when it originally created the request
  4. if the e-mail was sent successfully, the client could again be pinged at the URL the customer configured, with another type of event (e.g. email_sent), considering that the customer chose to receive requests for this event type

It's probably obvious by now that there's a tremendous amount of work left for the developer in order to send out a WebHook request, if that request is generated by an HttpClient object – or anything similar.

Don’t get me wrong – there’s nothing wrong with this approach. But there’s a better way of doing all this if-registered-get-URL kind of logic when it comes to WebHooks and .NET code.

Put The NuGet Packages To Work

At the time of this writing, there are exactly four NuGet packages containing the Microsoft.AspNet.WebHooks.Custom name prefix and the reason for this large number is going to be explained throughout the remainder of this post.

First, there's a package simply called Microsoft.AspNet.WebHooks.Custom, which is the core package you want to install when you're creating your own custom WebHooks. Additionally, there's a Microsoft.AspNet.WebHooks.Custom.AzureStorage package which works like a charm when you want to store your WebHook registrations in persistent storage – and yes, by now I've spoiled the surprise: the NuGet packages not only send WebHooks, but they also handle the entire registration and event-based selection story for you, which is not exactly obvious in my humble opinion. Third, there's a Microsoft.AspNet.WebHooks.Custom.Mvc package which aids in the actual registration process, should your application run as an ASP.NET MVC application. Lastly, there's a Microsoft.AspNet.WebHooks.Custom.Api package which does a great job by adding an optional set of ASP.NET Web API controllers useful for managing WebHooks, in the form of a REST-like API.

I’ll keep things simple in this post, so rather than focus on the magic which comes along with the .Mvc, .AzureStorage and .Api packages, I’ll simply create a console application that will act as a self-registree and sender of WebHooks. In order to intercept the WebHooks and check that the implementation actually works, I’ll create a plain simple WebAPI application and add the required NuGet packages to it so that it can handle incoming WebHooks requests.

The entire source code is available on GitHub here.

As you'll see, the majority of the code currently runs in Program.cs. The work done in the Main method is simply about getting things ready; more specifically, I first instantiate the objects called _whStore and _whManager – the latter requires _whStore as a parameter. These objects are responsible for the following:

  • _whStore is the store which keeps each subscriber's WebHook registration; a registration describes:
    1. What events the subscriber is interested in, in the form of Filters. This instructs the manager object, which does the actual sending, to only send web hook requests when those specific events occur
    2. Its secret, which should ideally be unique – this secret is used to calculate a SHA256 hash of the body. The subscriber should afterwards only accept WebHooks which contain a properly calculated hash over their request bodies – otherwise, these might be scam WebHooks
    3. A list of HTTP header attributes
    4. A list of properties which are sent out with each and every web hook request with the exact same values, no matter what the event is
  • _whManager is the do-er, the object which actually sends the web hook requests. Since it has to know whom to send the web hook requests to in the first place, it requires the WebHookStore-type object as a parameter in its constructor. In addition, it also requires an ILogger-type object as a second constructor parameter, which is used as a diagnostics logger
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNet.WebHooks;
using Microsoft.AspNet.WebHooks.Diagnostics;

class Program
{
    private static IWebHookManager _whManager;
    private static IWebHookStore _whStore;

    static void Main(string[] args)
    {
        // The in-memory store holds the registrations; the manager does the actual sending
        _whStore = new MemoryWebHookStore();
        _whManager = new WebHookManager(_whStore, new TraceLogger());

        SubscribeNewUser();
        SendWebhookAsync().Wait();
        Console.ReadLine();
    }

    private static void SubscribeNewUser()
    {
        // Register a subscriber ("user1") interested only in 'event1'
        var webhook = new WebHook();
        webhook.Filters.Add("event1");
        webhook.Properties.Add("StaticParamA", 10);
        webhook.Properties.Add("StaticParamB", 20);
        webhook.Secret = "PSBuMnbzZqVir4OnN4DE10IqB7HXfQ9l";
        webhook.WebHookUri = "http://www.alexmang.com";
        _whStore.InsertWebHookAsync("user1", webhook);
    }

    private static async Task SendWebhookAsync()
    {
        // Raise 'event1' for "user1"; the manager looks up matching registrations and posts to them
        var notifications = new List<NotificationDictionary> { new NotificationDictionary("event1", new { DynamicParamA = 100, DynamicParamB = 200 }) };
        var x = await _whManager.NotifyAsync("user1", notifications);
    }
}

 

The good thing about a simple ‘Hello, World’ sample application

The good thing about this sample is that WebHooks can be, in my opinion, self-taught if the proper explanations are added. More specifically, the reason the IWebHookStore interface exists is that you'll most likely NOT use a MemoryWebHookStore in production workloads, simply because stopping the application and running it again will completely delete any subscriber registrations – ouch.

Therefore, implementing the IWebHookStore interface yourself will help you a lot, meaning that you could implement your own database design for storing the subscriber registrations along with all the properties and extra HTTP headers they require, based on the events (a.k.a. actions, a.k.a. filters) they chose in some kind of registration form. However, please be aware that the .AzureStorage NuGet package I mentioned earlier eases development even further, by auto-"magically" doing the persistent storage part of the registration on your behalf – uber-cool! I'll detail the process of using Azure Storage as your backend for web hook subscriptions in a future post.

Additionally, there’s an interface for the manager as well which only does two things (currently!) – verify the web hooks registered and create a new notification. There are a few things which are important for you to keep in mind here:

  1. Notification is done by passing in the user name as a parameter. If it isn't obvious why you'd do that, since you've already specified the users' usernames upon registration, remember the flow: users register, an event occurs in the system on a per-user-action basis, and that particular user gets notified. The second parameter is an enumerable of notification dictionaries, which is actually a list of objects specifying the event which just occurred and which determines the WebHook request to be fired in the first place – since the notification can also send extra data to the subscriber in the request body, this parameter cannot be a simple string, and as such requires two parameters when instantiated: the event name (as a string) and an object which will eventually get serialized as JSON.
  2. I'd argue that the default implementation of IWebHookManager, namely WebHookManager, will meet most of your needs and there's probably little to no reason to implement your own WebHookManager instead. If you're not convinced, take a look at its source code (yes, Microsoft DOES LOVE OPEN SOURCE!) and check out the tremendous work they've done so far on the WebHookManager class. I do have to admit though that, in terms of coding style, I'm very unhappy with the fact that if the manager fails to send the web hook request, no exception or error code will ever be thrown from the .NotifyAsync() method – this decision might have been taken because the method will most likely be called from a worker-role-type application which shouldn't ever freeze due to an unhandled exception. If that is the case, it's too bad that you, as a developer, cannot make that decision yourself. On the other hand, remember the ILogger object (of type TraceLogger) you used when you originally instantiated the manager – many methods will eventually use the logger to emit diagnostics, and these can help a lot when you're trying to figure out whether any web hook requests were sent out.

And since I've mentioned ILogger, let me remind you that if you add a trace listener to your application and use the TraceLogger type already available in the NuGet package, you will get the diagnostics data in the trace listener you've added. Should that listener be of type TextWriterTraceListener, the traces the WebHookManager creates will be written to disk.


 <system.diagnostics>
   <trace autoflush="true" indentsize="4">
     <listeners>
      <add name="TextListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="trace.log" />
      <remove name="Default" />
     </listeners>
   </trace>
 </system.diagnostics>

Options, Options, Options…

I've mentioned earlier the usefulness of the interfaces the NuGet packages bring along, given their flexibility in covering any scenario you'd need. There's however something even better than that, and that's dependency injection support. More specifically, the NuGet packages also have a so-called CustomService static class which you can use to create instances of your WebHookManager, WebHookStore and so on.

Conclusion

WebHooks are here to connect the disconnected nature of the web, and they are here to stay. They are certainly not a new technology, not even a new concept – but they could still revolutionize the way we trigger our REST-based endpoints to execute task-based operations. If you're new to WebHooks, get started today. If you're a hard-core ASP.NET MVC developer, integrate WebHooks in your projects today. And if you're an Azure Web App developer, why not develop WebJobs triggered by WebHooks? Oops, I spoiled my next post's surprise :-)

 

Happy web-hooking-up the World!

-Alex

Hi guys,

It has been a while since my last post and that’s because I had quite a busy summer; more specifically, besides my day-to-day job, a few trips and conference preparations for the 2015/2016 season, I also got the chance to work with O’Reilly on one of their video trainings. So in other words, I hereby kindly announce my first project as a trainer for O’Reilly Media.

oreilly logo

From their website:

O’Reilly Media started out as a technical writing and consulting company named O’Reilly & Associates. In 1984, we started retaining rights to manuals we created for Unix vendors. Our books were grounded in our hands-on experience with the technology, and we wrote them in a straightforward, conversational voice. We weren’t afraid to say in print that a vendor’s technology didn’t work as advertised. While our publishing program has expanded to include everything from digital photography to desktop applications to software engineering, those early principles still guide our editorial approach.

Read More →

I’ve recently added a cool feature on http://alexmang.ro, namely subscriptions. If you like my writing or ever stumbled upon one of my articles via your favorite search engine (Bing!, of course), why not automatically get notified when I post anything new?

Blog subscription

Blog subscription works by simply typing your e-mail address into the 'insert e-mail here' text box on the right column and clicking the 'Keep me posted!' button. After subscribing, you'll get an e-mail from donotreply@wordpress.com whenever I publish a new blog post.

Happy coding!