Author Archives: Cameron Dwyer
This is the continuation of my experience with testing the auto-scaling capabilities of the Azure App Service. The first post dealt with scaling out as load increases, and this post deals with scaling back in when load decreases.
On the Scale Out blade of the App Service Plan you can see the current number of instances it has scaled to, along with the Run History of scaling events (such as when each new instance was triggered and when it came online). Here we can see that under load my App Service Plan had scaled out to 6 instances, which was the maximum number of instances I’d configured it to scale out to.
At this stage I removed the heavy load testing to watch it scale back down. In fact I cut all calls to the service so it was experiencing zero traffic. After 20 minutes I was starting to get worried: I still had 6 instances and no sign of any attempt to scale back in. Then I realised I also had to set up scale-in rules – it won’t do it by itself!
On the same page where I configured the rules for when to scale out, I also needed to configure rules to trigger scaling back in. I configured a simple rule: when CPU usage dropped below 70%, it was time to scale in by one instance. I set the cool-down period to 2 minutes so I wouldn’t have to wait around forever to see it scale in.
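To make the rule concrete, here’s a rough Python sketch of the semantics I’d configured. This is illustrative only, not Azure’s actual implementation; the threshold and cool-down values mirror my settings.

```python
from datetime import datetime, timedelta

def should_scale_in(avg_cpu_percent: float,
                    last_scale_event: datetime,
                    now: datetime,
                    threshold: float = 70.0,
                    cooldown: timedelta = timedelta(minutes=2)) -> bool:
    """Scale in by one instance only if average CPU is below the
    threshold AND the cool-down since the last scaling event has elapsed."""
    if avg_cpu_percent >= threshold:
        return False
    return now - last_scale_event >= cooldown
```

The cool-down is what stops the rule from firing repeatedly before the metrics have had a chance to settle after the previous scaling event.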
After saving those configuration changes, I expected to see the number of instances scale in and decrease by 1 every 2 minutes until it was back to a single instance.
Suddenly an email notification popped up, it had started happening!
I received the notifications and watched the Azure Portal over the next few minutes as it scaled back from 6 to 5 to 4 to 3 instances. Then it stopped scaling in. I waited over half an hour, scratching my head as to why it was failing to scale in. The CPU usage graph showed it was well under the scale-in threshold of 70%; in fact it had peaked at just 16% during that half hour of waiting. Why were these instances stuck running? I was paying for them and I didn’t need them.
On re-reading the Microsoft documentation on scaling best practices, it became clear what was happening. You have to consider what Azure is trying to do when it scales in. Azure tries to predict the position it will be in after the scale-in operation, to ensure it isn’t putting itself in a position that would immediately trigger another scale-out operation. So let’s look at how Azure was handling this scenario.
It had 3 instances running. From my first blog post on scaling out, we had already established that each instance sat at 55% memory usage and nearly 0% CPU usage when idle. The trigger to scale in was CPU usage lower than 70%, and with average CPU usage under 15%, Azure had passed that trigger.

But let’s look at what Azure thinks will happen to memory utilisation if it were to scale in. Each instance had 1.75GB of memory allocated to it (based on the size of an S1 plan). So in Azure’s eyes, my 3 instances each running at 55% memory usage required a total of 2.89GB (1.75GB * 0.55 * 3). If we scaled in and were left with 2 instances, those 2 instances would have to be able to handle the total memory usage of 2.89GB (1.44GB each). Let’s do the maths on the resulting memory usage of each instance: 1.44 / 1.75 * 100 ≈ 82%. Remember the scale-out rules I had set? They were set to scale out when memory usage was over 80%. Azure was refusing to scale in because it predicted that doing so would trigger an immediate scale-out operation. What Azure failed to take into account was the baseline memory usage: every instance uses 55% memory (about 0.96GB) just doing nothing.
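The arithmetic above is easy to check in a few lines of Python (the S1 memory size and the 55% idle baseline are the figures from my own testing):

```python
# Reproduce Azure's scale-in projection for my setup
instance_memory_gb = 1.75   # memory per instance on an S1 plan
current_instances = 3
observed_usage_pct = 55     # idle baseline I measured per instance

# Azure's (flawed) assumption: total memory demand stays constant, so the
# remaining instances must absorb the departing instance's share
total_gb = instance_memory_gb * (observed_usage_pct / 100) * current_instances
projected_pct = total_gb / ((current_instances - 1) * instance_memory_gb) * 100

# total_gb works out to roughly 2.89 GB and projected_pct to roughly 82.5%,
# which is over my 80% scale-out threshold, so Azure predicts an immediate
# scale-out and refuses to scale in
would_retrigger_scale_out = projected_pct > 80
```

The flaw is in the "demand stays constant" assumption: the 55% per instance was a fixed baseline, not a shared workload that gets redistributed.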
In reality, if Azure did scale in by one instance, the remaining 2 instances would both continue to run at 55% memory usage, and scaling in again to a single instance would still leave that last instance running at 55% memory usage.
Azure’s auto scale-in isn’t a perfect predictor of what’s going to happen, and you need to pay careful attention to the metrics you scale on. My advice is to test your scaling configuration with load testing, as I’ve done here, so you have confidence you’ve actually seen what happens under load. As this little test has proven, the behaviour isn’t always obvious or what you’d expect, and a mistake here could lead to some nasty bill shock.
If you are stuck with scenarios where you can’t auto scale in, or you are concerned that scale-in may not work, here are a few options to consider:
- Configure a scheduled scale-in rule to forcibly bring the instance count back to 1 at a time of day when you expect the least traffic
- Configure a periodic alert to notify you if the number of instances is over a certain amount; you could then manually reduce the count back to 1.
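For the first option, the scheduled reset can be expressed as a recurrence profile in the plan’s autoscale settings. The sketch below shows the general shape as a Python dict; the field names follow the ARM autoscale-settings schema, but treat the values (time zone, hour) as placeholders to adapt, and verify the schema against the current documentation.

```python
# Illustrative recurrence profile that pins the instance count to 1 overnight.
# Field names follow the ARM autoscale-settings schema; values are placeholders.
overnight_reset_profile = {
    "name": "overnight-reset",
    "capacity": {"minimum": "1", "maximum": "1", "default": "1"},
    "rules": [],  # no metric rules; the capacity alone forces the count to 1
    "recurrence": {
        "frequency": "Week",
        "schedule": {
            "timeZone": "AUS Eastern Standard Time",
            "days": ["Monday", "Tuesday", "Wednesday", "Thursday",
                     "Friday", "Saturday", "Sunday"],
            "hours": [2],    # 2 AM local time, an assumed low-traffic window
            "minutes": [0],
        },
    },
}
```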
I was recently testing the automatic scaling capabilities of Azure App Service plans. I had a static website and a Web API running off the same Azure App Service plan. It was a Production S1 Plan.
The static website was small (less than 10MB) and the Web API exposed a single method which did some file manipulation on files up to 25MB in size. This had the potential to drive memory usage up, as the files would be held in memory while this took place. I wanted to be sure that under load my service wouldn’t run out of memory and die on me. Could Azure’s auto-scaling handle this scenario and spin up new instances of the App Service dynamically when the load got heavy? The upside, if this worked, is obvious: I wouldn’t have to pay the fixed price of a more expensive plan that provided more memory 100% of the time; instead I’d just pay for the additional instance(s) that get dynamically spun up when I need them, so the extra cost during those periods would be warranted.
As an aside before I get started, it’s worth pointing out that memory usage reporting works totally differently on the Dev/Test Free plan than on Production plans. I’d guess this has to do with the fact that the Free plan is a shared plan, where it really doesn’t have its own dedicated resources.
What I noticed is that if I ran my static website and Web API on the Dev/Test Free plan, memory usage sat at 0% when idle. As soon as I changed the plan to a Production S1, memory sat at around 55% when idle.
Enabling scale out is really simple: it’s just a matter of setting the trigger(s) for when you want to scale out (create additional instances). I was also impressed with some of the other options that gave fine-grained control over the sample and cool-down periods, ensuring scaling happens in a sensible and measured way.
Before configuring my scale-out rules, I first wanted to check my test rig and measure the load it would put on a single instance, so I knew at what level to set my scaling thresholds. This is how the service behaved with just the one instance (no scaling):
You can see that the test was going to consistently get both the CPU and Memory usage above 80%.
Next I went about configuring the scale-out rules. Here I’ve set it to scale out if the average CPU usage > 80% or the average memory usage > 80%. I also set the maximum number of instances to 6.
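As a sketch, the two rules combine with an OR, capped at the maximum instance count. Again, this is illustrative rather than Azure’s implementation; the thresholds and cap are the ones I configured.

```python
def should_scale_out(avg_cpu_pct: float, avg_memory_pct: float,
                     instances: int, max_instances: int = 6,
                     cpu_threshold: float = 80.0,
                     mem_threshold: float = 80.0) -> bool:
    """Add an instance if either average metric breaches its threshold,
    never exceeding the configured maximum."""
    if instances >= max_instances:
        return False
    return avg_cpu_pct > cpu_threshold or avg_memory_pct > mem_threshold
```

Note that either metric alone is enough to trigger a scale-out, which matters later: the memory rule is also what Azure consults when predicting the outcome of a scale-in.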
I also liked the option to get notified and receive an email when scaling creates or removes an instance.
So did it work? Let’s see what happened when I started applying some load to both the static website and the Web API.
Before long I started getting emails notifying me that it was scaling out; each new instance resulted in an email like this:
These graphs show what happened as more and more load was gradually applied. The red box is before scaling was enabled, and the green box shows how the system behaved as more load was applied and the number of instances grew from 1 to 6. Notice how, even while the average CPU and memory usage dropped, the amount of Data Out and Data In during the green period was significantly higher? Not only was the CPU and memory usage across the instances lower on average, but the system was able to process a much higher volume of requests.
I have to say, I was pretty impressed when I first watched all this happen automatically in front of my eyes. During testing I was also recording the response codes from every call I was making to the static website and Web API. Not a single request failed during the entire test.
But what happened when I stopped applying such a heavy load on the website and web API? Would it scale down just as gracefully? Read on for part two of this test to find out.
Microsoft has been building out its ‘cloud first’ strategy for many years now. As once-separate on-premises server products have morphed into online services, we have been reaping the benefit of tighter integration between these products. We’ve seen this most strongly with the Office 365 services, where SharePoint, Office (Word, Excel, PowerPoint, Outlook, OneNote), OneDrive, and Teams have become tightly integrated. From a development point of view, this cloud strategy has provided Microsoft with a way of delivering a single unified API across the entire surface area of Office 365, covering all these product services. This API is the Microsoft Graph API. Recently Microsoft added a number of its security services to the Graph API, providing a standard interface and uniform schema to integrate security alerts, unlock contextual information, and simplify security automation.
Microsoft is running a developer hackathon (the Microsoft Graph Security Hackathon) which simply involves using the new security APIs to see what good use you can put them to. There are some awesome prizes on offer ($15,000 worth!). There are also some great judges looking at the entries, which will give your submissions and ideas some great exposure.
Get coding and submit your entry before March 1, 2019.
It’s no secret that I think Microsoft Teams is an awesome product. I’ve written in the past about how I believe Teams is a great enabler for making people more productive and brings the right tools together in a place that makes a lot of sense.
What are messaging extensions?
Messaging extensions are available when you are composing a chat message (either starting a new conversation or replying to an existing message). They assist you by inserting content into the chat message you are composing.
Below is the Giphy messaging extension that allows you to search for an animated image and insert it into your message.
Why develop a messaging extension?
One of the resounding benefits of the Microsoft Teams client is being able to extend the out-of-the-box capabilities and integrate with your existing line of business (LOB) applications. Usually the benefit of integrating LOB applications into Teams is that it reduces context switching for users: they spend more time actually getting work done and less time moving between applications, copying and pasting data or links.
Take a CRM application as an example: suppose you are discussing a customer in a Teams conversation and need to explain where that customer is located or what their contact details are. The steps you would normally go through would be to open your CRM application, search for the customer, find the relevant details, then cut and paste multiple fields of text, switching back and forth between the customer details in the CRM app and the conversation in Teams.
By extending Teams with Messaging Extensions, it is possible to provide an integration into your CRM system that would allow a user to search for a customer in Teams while composing a chat message, select the customer and have the details formatted nicely and inserted in the chat message all without having to leave Teams or even open the CRM application.
When you look at the core LOB applications within your organisation there are some key integrations such as this that can bring about real productivity gains.
Where to start with messaging extension development?
I know I should be doing something a little more productive than this, but hey it’s the holiday season and little LEGO MVP me needs to unwind too 🎄
Wishing you all a very happy, healthy and successful 2019.
Pretty sure I’m not the only one who’s gone down this rabbit hole…
Microsoft Teams is the hub for teamwork in Office 365. The vision Microsoft has with Teams, of bringing existing products and services together into a central hub and minimising context switching, is a vision I share and have been fostering for over a decade. Before Teams came along, I’d spent much of my career working primarily within Outlook and integrating/surfacing other applications and services within Outlook to provide that hub. Outlook was (and still is in most organisations) the first application opened in the morning and the last to be closed at the end of the day. To me it has been blindingly obvious that we can help make users more efficient and productive by bringing the data and information they work with into the core applications they already live inside of, and prevent them from having a plethora of applications open on the desktop and constantly switching between them. Outlook was that hub application for me.
Enter Microsoft Teams, a client built from the ground up to bring existing services, applications and product features into a central hub, in the context of teams of people working together on a common goal or purpose. Finally Microsoft was no longer thinking in terms of individual products isolated from each other, and was starting to realise the benefit of combining the power of all of those products for a focused purpose. In essence that is what we have all been doing for many years: we select a mix of products that allow us to get our job done and, where possible, we try to integrate them, because products that are integrated just make our lives easier!
It’s easy to see why Microsoft Teams has been getting some seriously good traction since its introduction and is set to overtake Slack.
I’m sure this uptick in usage was also helped by the fact that Microsoft Teams is now a free offering.
As with any product, it won’t magically fix your business problems simply by being installed and present on users’ machines. To make any product successful you will need a plan, and to execute on it. To this end Microsoft has released an excellent resource in the Microsoft Teams Adoption Guide. This flipbook packs a lot of valuable information into a very polished and concise package. I highly recommend it as your starting point for a successful implementation of Microsoft Teams.
What I particularly like about Microsoft Teams is that it already has a rich extensibility story, with developers able to bring existing line of business applications into the Teams client, allowing Teams to be the hub not only of Microsoft products and services but also of non-Microsoft products and your own custom applications.
I attended my first Office Developer Bootcamp on Friday last week. I wasn’t 100% sure of what to expect and had a little extra pressure as I was also presenting on Office Add-in development and assisting with the labs.
I ended up having a really fun day. Lots of great conversations were had during the bootcamp at Microsoft’s office in North Ryde. I was a little surprised that the questions weren’t all technical; I fielded a few questions more related to the business side of commercialising add-ins. Thanks to all those that attended and to the great people behind organising the day (Ashish, Amr, John).
Here’s my slide deck from the ‘Developing Office Add-ins’ talk.
Thanks to everyone that came along to the Sydney SharePoint User Group this month. It was great to be able to deliver so much exciting SharePoint news following all the announcements made at Microsoft Ignite. Given Microsoft Ignite now covers far more than just SharePoint, it takes a while to distil the SharePoint-specific announcements from the over 700 sessions presented across 5 days at Microsoft’s biggest conference of the year.
I’ve kept the presentation to just the User/IT Pro announcements (sorry developers, I couldn’t fit all the news into a 1-hour presentation!)
Feel free to take this presentation and use it for your own user groups or internally within your organisation.
As the dust settles on Microsoft Ignite for another year I’m left going back over my notes and recalling discussions I had for all those key announcements, advice and snippets of gold that will have a real impact for Office developers.
If you are looking for a high-level list of announcements made at the conference, the Ignite Book of News is a good place to start, although it doesn’t cover many of the announcements made in the Office Developer area. It does cover a lot of the Azure announcements, in which most Office developers will have at least a mild interest (we have to host our code somewhere!)
Here’s some of my favourite announcements:
- Call Microsoft Graph and Web APIs and deploy Extensions across your SharePoint sites
- Deploy your web parts and application pages to Microsoft Teams
- Connect across components with dynamic data capabilities
- Deliver complete applications with application pages
- Harness more of SharePoint with new Microsoft Graph APIs
- Managed access to Microsoft Graph (Data Connect, for bulk export to an Azure subscription)
- Notifications API
- Dynamics is now in Microsoft Graph
- New PowerApps templates
- Security API
- Microsoft Teams, Messages, Calendars, Files, and Folders
- In preview but suitable for production use
- Capable of reaching both v1 and v2 services
I thought this year’s conference was very well run, and the volume of people moving about the conference centre wasn’t overwhelming. I had a lot of fun meeting new people and reconnecting with old friends. It’s great to have such knowledgeable Microsoft staff accessible on the expo hall floor (from both the marketing and engineering sides) to discuss particular scenarios and technologies, and bounce ideas off.