Solving complex performance problems in .NET Core [Webinar]

40 min. (8,359 words)

“It’s very much clear that .NET Core is the way going forward. Certainly new features, and very much performance-related features, seem to be only going in one direction. So, there’s this added incentive to move over.” - Matt Warren, performance expert at Raygun.

What to expect in this webinar

Today, our host Andre talks to Matt Warren, .NET (C#) Developer at Raygun and Microsoft MVP. Matt shares the main differences between .NET and .NET Core with a focus on the performance improvements, and how we can measure those results using Raygun.

Matt knows his end users well, and brings over five years of APM experience to the table to deliver a detailed look at how to profile and trace multiple threads so you can find and diagnose poorly performing queries in seconds.

Watch the recording above for tips on setting up an automated workflow that lets you easily identify and reproduce issues, and to learn what’s unique about .NET Core architecture so you know where to look for common performance bottlenecks.

Show notes

  • [0:3:57] There are more reasons to move over to .NET Core for a new project than not
  • [0:5:29] What we learned from our own migration about the differences in performance between .NET and .NET Core
  • [0:11:32] What the future holds in .NET Core 3.0 when it comes to performance improvements
  • [0:14:27] The performance improvements in the Kestrel web server that you can leverage to make sure performance stays high
  • [0:17:23] Async all the things! Why asynchronous code has become more ubiquitous and best practice - even if we aren’t aware of it.
  • [0:20:36] We live in a cross-platform world now. How can we use that to our advantage, and where are the gaps we need to be aware of?
  • [0:26:23] Demo of .NET performance counters
  • [0:28:36] Concerned about .NET Core, EF, and .NET performance?
  • [0:29:20] Does async always improve performance?
  • [0:32:45] What to do if you’re experiencing excess memory usage when adding tables to .edmx files.
  • [0:34:02] In .NET Core, should you still use ConfigureAwait(false)?
  • [0:37:07] Demo of the Raygun APM dashboard, and a look at how to measure performance in Raygun

Full transcript

Andre: So today we’re going to talk to you about solving complex performance issues in .NET and .NET Core. I’m really excited to get into this topic. I had a bit of a dry run with Matt a couple of days ago and was really astounded by the depth of knowledge that he brings to the table, so I’m really looking forward to what he’s going to talk to us about today. First of all, a quick introduction. Matt, do you want to jump to that next slide?

Andre: Matt is the .NET Performance Expert. He specializes in application performance monitoring, and I’ll let him explain a little bit about what that means and how he ended up in that niche. He loves measuring, finding and fixing performance issues, which is something that was really obvious when we were running through these slides a couple of days ago. He’s an active blogger at mattwarren.org, so make sure you go check that out. He’s a Microsoft MVP, and you can find him on Twitter. As mentioned, he is our UK-based Raygunner, and has been working for Raygun for a little while now. Matt, do you want to talk about how you ended up at Raygun, and also how you ended up diving into the depths of .NET and .NET Core performance?

Matt: Sure, thanks, that’s a lot of introduction, that’s nice. I might have to frame that and record it. I’ve been doing APM, application performance monitoring, for around five years or so now, maybe a tad longer. I almost fell into it by accident, but I realized it was a good fit for me because there are a few things I found quite unique and interesting in the APM space. One is that you’re building tools for developers.

Matt: That’s quite nice; hopefully it means I know my end users quite well, which is always useful. But the other side of it is you really have to have a good appreciation of internals. I’m never quite sure if I found this role because I liked internals, or whether I learned to like internals because of the role, so it’s a bit chicken and egg. But either way, it’s a good fit for me. That’s how I found my way there.

Matt: And I also realized along the way that you have quite a nice excuse in APM, because you’re writing software that’s helping other people monitor their software, and one of the golden rules is you never want to be the thing that’s making their software run slower. There’s always a bit of overhead for the sort of things we do, but it basically means I have an excuse to care about performance quite early on, because you don’t want your APM tool to be making your site worse, really.

Matt: That’s one of the golden rules you live by in this sort of space. So all of those things sort of fit together well for me, really. I’ve got an excuse to understand things under the hood, I’ve got an excuse to care about performance, and ultimately I help other people care about performance through the tools I work on.

Andre: Excellent, that’s very cool. All right. We’ll dive into your slides and I’ll hand it over to you and I’ll turn off my video. But before I do I’d like to ask everyone a quick question. This is just kind of to gauge where you are at in terms of moving over to .NET Core. I’m going to launch this poll, it’s only one question. I’m hoping everyone can see that. Have you made the switch to .NET Core? Please drop in your answers, I can see the answers coming in. I think I’ll be able to share it with you guys in a second.

Andre: Let’s give that about 10 more seconds. We’ve got 40% of people with a live .NET Core app, 40% in the process, 5% saying mate, it’s not going to happen, and 22% saying no, but it will be happening soon. We got 85% of respondents, so I think we’re pretty happy with that. Let’s end the poll and I’ll just share the results really quickly. Is that in line with where you think the industry is at, Matt?

Matt: I think .NET Core when it first came out was different, but I think we’ve reached, if not a tipping point, then we’re getting close to one. Certainly for new projects, I see that there are more reasons to go to .NET Core than not. Basically, it’s hard to avoid the question when you’re rewriting old projects, but that’s a different case, very different. I think it makes a lot of sense for anything new or anything you’re looking to rewrite; there are a lot of benefits to .NET Core. Part of what we’re going to talk about today is some of the justifications, some of the differences around performance in .NET Core and why it’s worthwhile, hopefully for the ones that are on the fence maybe, or thinking about it, and what benefits they might get.

Andre: Well, it sounds like 80% have either done it or are in the process. But it’s probably one of those things you don’t just switch over to as soon as it comes out. You want to make sure it’s where it needs to be and that you’ve got all your ducks in a row. That’s a pretty good indication. I’ll stop there; that should be gone. All right, Matt, I’ll turn off my video, mate, and I’ll hand it over to you. Once again, if you have questions as Matt presents today, please drop them in the Q&A. Also, if you have any comments, drop them in the chat; I’ll be monitoring that stuff and I’ll be asking questions at the end of the show. All right Matt, take it away, mate. I’ll turn this off.

Matt: Okay, thanks, Andre.

Andre: No worries.

Matt: We’re going to start today, and I don’t mean to say this evening, because I know it’s not the evening for everyone. Anyway, we’re going to start today with a look at .NET Core versus .NET Framework, then particularly get onto things around multi-threading, and then the Raygun side of things, what we can provide as a product there, with some examples and demos at the end.

Matt: Let’s make a start. From the sound of it, people are very familiar with .NET Core versus Framework and some of the differences. But what I really want to do in this section is focus on the performance side of things, the changes that are related to performance and why that matters. Some of this is general knowledge, but quite a bit of it is things that I and my colleagues have picked up as we’ve ported our own tooling, the Raygun tooling and products, to .NET Core.

Matt: Some of it is from experience, some of it is from general things, but it’s very much with a focus not on the general changes between .NET Core and Framework, but on the ones specifically around performance, and where there’s added performance in .NET Core that you might not see in .NET Framework. Particularly, as we go on later, how you can measure that and see what’s going on.

Matt: I want to start with this just as a reminder, if people aren’t clear, about where we are at the moment with .NET Framework versus .NET Core, and this is obviously not me saying it. This is Scott Hunter, Director of Program Management for .NET. We’ve reached a point now where things have been different over the last few years. If you had started with .NET Core in the 1.0 timeframe, you had a different experience of .NET Core than you have today.

Matt: But it’s very much clear that .NET Core is the way going forward, and certainly new features, and very much performance-related features, seem to be only going in one direction. So there’s this added incentive, I guess, to move over. This will change a little bit in the future, I guess, with .NET Core 5, or .NET 5, and things working that way. But a lot of what I’m going to be saying in this talk is stuff that is only available in .NET Core; it has not been moved to .NET Framework.

Matt: But let’s start at the bottom, my favorite place I guess, with runtime changes, and particularly, as I said, with a focus on performance. Many of you will be aware there’s a new type that’s popped up in .NET Core: Span. This isn’t intended to be a tutorial about Span; there’s a link there, and we’ll make all the slides available for people who want to follow the links through to a full tutorial. But the main reason this type has come about is performance.

Matt: What Span lets you do is save on memory copies, and we’ve actually used this ourselves: we’re using Span in parts of our product where we’re parsing lots of messages off the network and wanting to pull out parts of them. Before, without Span, you would have had to copy memory around. That’s the main problem Span is solving. Why it’s particularly interesting in the .NET Core runtime is that, fundamentally, the same support isn’t there in .NET Framework.

Matt: This is one of the first features that wasn’t backported to .NET Framework in the way it exists in .NET Core. You can use Span in .NET Framework, but it’s not as efficient, it’s not as ingrained in the runtime, and it’s not as ubiquitous in terms of API support. So it’s part of a general trend we’re starting to see, the divergence; this is one of the first examples, and we’ll be seeing it more and more, where features are going into the .NET Core runtime only. Part of it, obviously, is where Microsoft wants to put its investment, I guess. But the other part is that .NET Core opens up new possibilities: it’s not tied into the OS in the same way that .NET Framework is, which ships through Windows updates.

Matt: So they’re able to [inaudible 00:09:36] in the runtime in a different way. You can have more than one runtime installed on a machine, so anything that would have affected compatibility, which was a big [inaudible 00:09:46] and very risky in .NET Framework, is less of a problem. It’s not that .NET Core isn’t caring about compatibility, but they’re certainly willing to put things in the runtime that they wouldn’t have done before. So Span is a fairly fundamental change in the runtime, really focused on performance, and that performance comes from not having to copy memory, which has advantageous effects on the garbage collector. So that’s how this ends up as a performance improvement.
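
To make the Span point concrete, here’s a minimal sketch of the kind of parsing Matt describes, pulling a payload out of a network buffer without copying. The length-prefixed wire format and the MessageParser name are hypothetical, purely for illustration:

```csharp
using System;
using System.Buffers.Binary;

static class MessageParser
{
    // Hypothetical wire format: a 4-byte little-endian length prefix
    // followed by that many payload bytes.
    public static ReadOnlySpan<byte> ReadPayload(ReadOnlySpan<byte> buffer)
    {
        int length = BinaryPrimitives.ReadInt32LittleEndian(buffer);

        // Slice returns a view over the same memory: unlike copying into
        // a new byte[], no bytes move and nothing is allocated, so there
        // is no extra garbage-collector pressure.
        return buffer.Slice(4, length);
    }
}
```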

Matt: Another one that’s maybe more clearly a performance improvement is something called Tiered Compilation. And this, again, is just in .NET Core. It’s been available since .NET Core 2.1 as a preview, turned off by default, and I believe it’s still scheduled to be turned on by default in .NET Core 3.0. It helps several things, but what the graphs here show, taken from the blog posts from Microsoft, is that it can really help startup time. In the world of microservices, and spinning up Azure Functions and all those things, where you care about startup time, Tiered Compilation can help.

Matt: And very briefly, it breaks the old constraint of the .NET just-in-time compiler. Classically in .NET, we’ve had a just-in-time compiler, and the JIT compiler would compile a method once. Tiered Compilation gives it the opportunity to compile a method multiple times. It does a very basic compilation the first time, which is why it helps startup time. And then, later on, it asks, “Is this method being executed a lot?” If it is, it has another go at compiling it, but this time it does the full optimizations.

Matt: And I think as it comes in .NET Core 3.0 and beyond, we’ll see more and more complex optimizations, which will help performance down the line, because the .NET just-in-time compiler has always been slightly limited: because it’s just-in-time, you can’t execute the method until the compiler is finished, so there’s a bit of a trade-off. Tiered Compilation helps that trade-off, and down the line we may well see more optimizations done. But certainly, at the moment, we’re getting decreased startup times because it’s able to compile quicker at the beginning.
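
If you want to try Tiered Compilation on .NET Core 2.1 or 2.2, where it ships as an opt-in preview, one way is an MSBuild property in the project file (setting the COMPlus_TieredCompilation=1 environment variable works too):

```xml
<!-- In the .csproj: opt in to tiered compilation on .NET Core 2.1/2.2,
     where it ships disabled by default (planned on-by-default in 3.0). -->
<PropertyGroup>
  <TieredCompilation>true</TieredCompilation>
</PropertyGroup>
```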

Matt: On top of that, there are pure performance improvements all the way through .NET Core. They’ve written a blog post for pretty much every single release, and if you were to follow these links through, you’d see tens, probably hundreds altogether, of performance improvements. I’ve heard it described as a peanut-butter effect: you get lots of small performance improvements, but spread across the whole runtime, base class libraries and framework, so we all benefit, and lots of small improvements do start to add up. Individually, we might only be talking milliseconds or less for some of these improvements.

Matt: But they’re in common bits of code that we all use day in, day out. And to give more concrete examples, here are some I picked at random from those lists; you can look through the rest in your own time. We’re seeing these changes coming in, and it’s not just Microsoft engineers, because .NET Core is now open-source and we’ve seen the community contribute. There are changes that make dequeuing and enqueuing twice as fast, and ToList on an enumerable, when you’re doing LINQ and calling ToList, can be optimized.

Matt: There’s EqualityComparer.Default for enums, and then some work was done on the bottom one, around socket handlers. This is what I mean about the peanut-butter effect: we all benefit, and it’s happening across the entire ecosystem of .NET. So, working our way up: we started with the runtime, and we’re now talking about improvements in the base class library and the framework libraries that we make use of. But probably the big one that people may be fairly familiar with is the Kestrel web server.

Matt: So, this is the tech behind the TechEmpower benchmarks, and we’re talking here about a very straightforward plaintext application. I’m not saying that we’re always going to get the seven-plus million requests a second; this is like the hello-world benchmark. But the point is that ASP.NET Core, or .NET Core in general I suppose, is now showing up high in the list, and it wasn’t before, basically. And this is really beneficial, because even if the apps we’re working on aren’t running at 7 million requests per second, at least we know the runtime is capable of that. Maybe it’s slightly marketing, but I think it’s good to know that the .NET runtime can compete in this high-end space alongside C++, Rust, Java and some of the other ones that are represented there.
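
For a sense of scale, the TechEmpower plaintext scenario is roughly a fixed response served straight off Kestrel with no MVC pipeline on top. A minimal ASP.NET Core 2.x sketch of that shape might look something like this:

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Program
{
    // A bare Kestrel pipeline: every request gets the same plaintext
    // response, which is why the benchmark numbers are so extreme.
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .Configure(app => app.Run(ctx => ctx.Response.WriteAsync("Hello, World!")))
            .Build()
            .Run();
}
```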

Matt: And the other nice effect of this is that a lot of the optimizations that went into making Kestrel faster trickled down. Some of them are just in the Kestrel web server itself, which we can use if we want to on .NET Core, but more significantly, some of those made their way back into the client libraries for HTTP, or into the parsing libraries for URLs, and things like that. So this is not just contained in the Kestrel web server; we all get to benefit from these. And I’ve also seen cases where someone’s justified sorting out a performance issue because it was making the Kestrel web server run slower. So it’s a forcing function, I guess, to make sure that performance stays high.

Matt: But onto some other things. Maybe it’s hard to… well, we’ll talk about why this is performance-related. There’s definitely a change in ASP.NET Core around asynchronous code: it’s become more ubiquitous. This is from the ASP.NET Core performance best practices page, and it’s very clearly calling out that you want to be using asynchronous code wherever possible. But more than that, the last point on the screen here is that the entire call stack is asynchronous.

Matt: It means that the Kestrel web server, and anything that’s built on top of it, the MVC pipelines and all those types of things, Razor Pages, are all asynchronous by default, I guess, is one way to think about it. And that helps performance because, as it says in the first point here, we’re not blocking threads, we’re not tying up threads. In terms of scalability, everything in .NET is generally built on top of the thread pool, and the thread pool can be shared; that’s what helps things like Kestrel scale up.
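
As a small illustration of the “don’t block threads” advice, here’s a hedged sketch of an MVC action written async instead of blocking; IOrderStore and Order are hypothetical stand-ins for whatever data access you use:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class Order { public int Id { get; set; } }

// Hypothetical data-access service, for illustration only.
public interface IOrderStore
{
    Task<Order> GetOrderAsync(int id);
}

public class OrdersController : Controller
{
    private readonly IOrderStore _store;
    public OrdersController(IOrderStore store) => _store = store;

    // Blocking version (avoid): _store.GetOrderAsync(id).Result pins a
    // thread-pool thread for the whole round-trip.
    //
    // Async version: the thread returns to the pool while the query
    // runs, so Kestrel can use it to serve other requests.
    public async Task<IActionResult> Get(int id)
    {
        var order = await _store.GetOrderAsync(id);
        return Json(order);
    }
}
```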

Matt: We definitely found this as we’ve been working on supporting .NET Core in Raygun: we’ve had to add a whole lot of support to cope with asynchronous code, and we’ll show some examples of that later. The part that becomes tricky is that you can no longer rely on anything running in one thread. So in terms of analyzing your performance, when your code starts running in one thread but resumes in another thread, you want to make sure you can analyze that. That’s part of what we’ve put into the Raygun APM tool recently.

Matt: So that’s asynchronous code; it’s become more and more ubiquitous and certainly best practice, and even if we’re not aware of it in our own code, it’s certainly in the code in ASP.NET Core. You’ve probably seen, over time, more and more async methods appearing with the Async suffix on the end, although there are some debates about whether people like that style. Anyway, it seems to be the way: those methods are async. Just to explain what we mean by supporting this in Raygun: we have the ability to let you visualize or profile your code when you use the async and await keywords, which is probably the main way most people are writing asynchronous code. We also support the tasks that this is all built on top of.

Matt: Obviously, if you really want to, you can manually schedule work with Task.Run and Task.Start, although I’d imagine most people are using the async keywords. We also support everything running through the thread pool, so you can queue things on the thread pool, use timers, and this is all available with our .NET Core 1.x and 2.x support. It will be available for .NET Core 3 at some point after it’s officially released. And just as a slight side story, to give a bit of an insight into the things you run into when working on an APM tool: if you haven’t guessed it by now, to make all this work, my colleagues and I have to become fairly knowledgeable about the internals, because we need to pull out internal implementation details of some of these things to extract the information we need to build up the traces we present in our dashboards, particularly in terms of asynchronous code and how we stitch together things from different threads.

Matt: And we need to be able to follow the flow as your code starts in one thread and is resumed in another, or even in the same thread; we need to follow things through. What that means is that we’re often relying on internal implementation details, which change in every single version, and I had the fun job of updating all our support across every single one of those .NET Core versions. I know them quite well by now. It means that if someone refactors, and obviously there’s a lot of refactoring going on, for performance reasons actually, we have to track these things.

Matt: This is not something you yourself need to worry about, but it does mean we have to have an understanding of what goes on under the hood, I guess is one way of thinking about it. And I’m sure that the support for .NET Core 3.0 will be along the same lines: we’ll have to rework some of our stuff to support the different way it’s all done, because the people writing the code make no promises about this stuff; these are all internal details. But these are all things we have to pick out to be able to support it in our product.

Matt: Aside from direct performance things, there are other things changing in .NET Core, and even if you’re not experiencing them yet, it’s handy to know about them. We now live in a cross-platform world in .NET. If we were just .NET Framework developers, we could rely on things just being Windows, and that’s fine; that’s the heritage of .NET. But .NET Core was initially released on Windows and Linux, and it’s now expanding to other OSs, from Raspberry Pis onwards; there are lots more different OSs and CPU architectures it can run on.

Matt: And what that generally means is a couple of things. One is that you really have to test the functionality and see. .NET is a platform and it attempts to abstract a lot of things away, but at various points it also relies on the underlying OS. Just to show some examples: there are differences in places like comparison of strings; when you get down to file streams and flushing buffers, that’s obviously very much tied into the OS; and even things like StartsWith, EndsWith and IndexOf can work differently.

Matt: There are various places where the .NET runtime calls out to the OS to provide some functionality, and it isn’t trying to guarantee that things behave the same on Linux as they did on Windows; it’s doing the thing that makes more sense for the platform. And we’ve definitely run into this ourselves as we’ve brought some of the Raygun tools across to Linux with .NET Core. I actually just fixed something the other day to do with network handling, and when you [inaudible 00:22:28] processes, there are a couple of places where we found we had to do things slightly differently under Linux and Windows.

Matt: Some of these may be older issues; they might be fixed. When it’s a clear bug it will get fixed, I’m not saying that, but there are times where the behavior is just different because it’s relying on the OS. I’ve seen more cases recently where the .NET runtime team are saying, “Look, we’re relying on the underlying OS. We’re not going to make it behave the same in .NET; we’re going to rely on the behavior of the OS.” So it’s just really a caution to look out for if you’re getting into this cross-platform side of things.
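
To see the kind of OS-dependent behavior Matt means, compare culture-aware and ordinal string operations. This small sketch is safe to run anywhere, but the culture-aware result can differ between Windows (NLS) and Linux (ICU):

```csharp
using System;

class ComparisonDemo
{
    static void Main()
    {
        // Culture-aware comparisons call into the OS's globalization
        // library (NLS on Windows, ICU on Linux), and the two can order
        // some strings differently.
        Console.WriteLine(string.Compare("a-b", "ab", StringComparison.CurrentCulture));

        // Ordinal comparisons are raw char-by-char checks, so they
        // behave identically on every OS (and are usually faster).
        Console.WriteLine("Windows".StartsWith("win", StringComparison.Ordinal));           // False
        Console.WriteLine("Windows".StartsWith("win", StringComparison.OrdinalIgnoreCase)); // True
    }
}
```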

Matt: Bringing it back to performance, the other side is that you can’t necessarily rely on your performance being the same. If you’ve tuned and tested your .NET code on Windows, and you’re looking to run your application on Linux as well, I’d just encourage you to do performance testing on Linux too. These are just some issues I picked up with a simple search; again, maybe these are all currently open issues, maybe they’ll be fixed at some point.

Matt: But sometimes it’s because it’s relying on underlying OS functionality that is different, so it’s just something to bear in mind: .NET has come from Windows, and everything was very tuned to Windows for the bulk of the lifetime of .NET. Stuff will get improved on Linux over time, but I guess we’re only three or four years into .NET running on Linux, as opposed to 15-odd years of .NET running on Windows. It’s just something to look out for from the performance side of things.

Matt: The other side of cross-platform is diagnostics. Again, for similar reasons, we’ve had .NET running on Windows for 10, 15 years, so the tooling is very much tailored to that, and in the new cross-platform world, some of that tooling won’t come across. One of the big examples is PerfView, which people have used and I’m familiar with; it’s a WPF application, so it’s not going to be running on Linux. There’s been a recent initiative to make more command-line tooling to try and fill this gap, and I’m going to show one of these tools in a moment.

Matt: If you’re familiar with anything around .NET Core, you’ve probably used dotnet build and dotnet run, but the set of commands is now being broadened out: there’s dotnet-counters for performance counters, dotnet-trace to run a performance trace, which you can capture on Linux and then load up in PerfView or other tools on Windows, and dotnet-dump for memory dumps. These are the initial ones, and I believe there are plans to expand this out. So it’s nice to see them ensuring that there’s a cross-platform experience, that the tooling runs the same across the different platforms .NET supports.
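
For anyone who wants to try these, the tools install as .NET global tools, and the basic commands look like this (names and flags as of the .NET Core 3.0 previews):

```
dotnet tool install --global dotnet-counters
dotnet tool install --global dotnet-trace

# Watch live runtime counters (GC, JIT, thread pool) for a running process:
dotnet-counters monitor --process-id <PID>

# Capture a trace you can open later in PerfView or Visual Studio:
dotnet-trace collect --process-id <PID>
```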

Matt: It does mean, though, that certain tooling we’ve got used to over the years may not be coming across to Linux; that’s just something to bear in mind again. But certainly, from what I’ve seen coming, and they’ve announced it recently in the blog post here, the diagnostic improvements arriving as of .NET Core 3 are really nice for working from the command line across different platforms and OSs.

Matt: And at Raygun as well, we’re looking to support our profiler and agent on Linux, and if that’s something you’re interested in trying out, please let myself or Andre know through the chat, and when they’re available we can let you know. We’re certainly looking to have all the things we’ve got available in our APM product on Windows running via .NET Core on Linux as well.

Matt: Just as a [inaudible 00:26:24], as opposed to a live demo, I’ll show you what it looks like. This is what you get with dotnet-counters, and it’s quite nice because, out of the box, it gives you all the useful counters you’d expect: the number of GCs per second, the size of the GC heaps. It’s a really good tool to start off with, and it’s available on Windows and Linux with .NET Core 3.0.

Matt: There’s a whole range of other performance counters. If you’ve done work with performance tracing and performance counters in .NET before, you’ve probably used Event Tracing for Windows, ETW. As the name makes obvious, that’s not cross-platform; it’s very much tied to Windows, piggybacking on top of what the OS provides. So what they did for .NET Core is make cross-platform tooling, and coming out of the runtime there are event traces via EventPipe; there’s a whole system to it. But what it means is that the runtime can produce metrics, things like we see here.

Matt: And depending on the OS, it can output in different formats. On Windows, it can output to something that makes sense on Windows; on Linux, it can use the LTTng libraries that are built into the OS. So they’re definitely making an effort, as of .NET Core 3.0, to make the diagnostics more of a level playing field across the operating systems, which wasn’t necessarily there before. I might just pause for a second; I think I’ve seen some questions in the chat, so it’s probably a good time to catch any that aren’t being answered. I’m just trying to look through. Are there any particular ones, Andre, that I haven’t covered? Sorry, let me just try and work from the back.

Matt: “Concerned about .NET Core, EF and .NET performance.” Yes, that’s a good point. I guess that’s about Entity Framework; I’m not going to claim to be an expert there. I do know that Entity Framework Core is a very different thing from Entity Framework on .NET Framework. So, excuse me, sorry, and ASP.NET Framework. [inaudible 00:28:58] ConfigureAwait, nice. Sorry, oh no, I see the answers. Okay, the ones I was seeing are answered. So it’s [inaudible 00:29:18].

Andre: Matt, can you hear me?

Matt: Yes.

Andre: There is one more question that says, “Async all the things: does that always improve performance? I know, because it happens under the hood, sometimes there can be issues; when it goes asynchronous, it can actually add more overhead unnecessarily. Any insights on that?”

Matt: That’s a good question; yeah, I missed that one. So yes, there is a trade-off. And actually, if anyone’s been following the CoreFX and other repositories, there are things they’ve tried to fix. One part is that the machinery, if you like, the state machines and stuff, has a bit of an overhead there for allocations. So they brought in ValueTask, which is like a Task but a struct, so they’ve addressed that in that way. And I know over time there’s been more and more of this. For most of the async stuff we don’t get to control it: we just use the async keyword, and I guess the compiler does half the job and then relies on some of the machinery provided by the runtime.

Matt: And I know for sure that they’ve done things in the .NET Core 2 timeframe, maybe .NET Core 1, to improve some of that. I think for the async side of it, if we’re just using the keywords, generally we’re relying on the work of the compiler team and the Microsoft runtime team to improve it. In our own code, well, in some places we have to use async if we want to play nicely with MVC and its pipelines and things. Possibly the one place where we can cause more of an issue in our own code is needlessly spinning up threads when we probably shouldn’t; parallelizing short-running tasks by pushing them onto multiple threads that we spin up ourselves is generally a bad idea.

Matt: The way we normally get around this is to queue things on the thread pool, and then let the thread pool, which is very intelligent about when it adds extra threads and when it reuses existing threads, do that. So I guess one approach is to try and stick to the high-level stuff, async/await, the TPL, tasks, those sorts of things, and rarely if ever drop down to spinning up raw threads; that’s generally good advice. And that document I included earlier, the ASP.NET Core performance guidelines, talks a little bit about that. Any other questions around what I’ve covered so far, any bits?
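
Here’s a minimal sketch of that advice, queuing small work items through Task.Run rather than creating raw threads:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadPoolDemo
{
    static async Task Main()
    {
        // Avoid: new Thread(...) per small work item. OS threads are
        // expensive, and the pool can't balance work it doesn't own.

        // Prefer: Task.Run queues onto the thread pool, which decides
        // when to add threads and when to reuse idle ones.
        var tasks = new Task[4];
        for (int i = 0; i < tasks.Length; i++)
        {
            int n = i; // copy the loop variable so each task sees its own value
            tasks[n] = Task.Run(() => Console.WriteLine(
                $"item {n} ran on pool thread {Thread.CurrentThread.ManagedThreadId}"));
        }
        await Task.WhenAll(tasks);
    }
}
```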

Andre: Did you see the question around EF6, where the performance decreases drastically as the .edmx grows? “We have a console application, and without many .edmx updates, with one table in the .edmx it takes 26 megabytes of memory, and when I add about 20 tables to the .edmx it takes around 70 megabytes. We’re using the Autofac IoC container.”

Matt: Yeah, good question. Sorry, I’m not the [inaudible 00:32:46]. I would recommend getting hold of something like PerfView or another tool to profile what’s going on there, because it does seem like quite a lot of excess memory. It’s always a trade-off with these sorts of things: we get a lot of nice capability with these high-level tools, which do a lot of stuff for us, but they also do a lot of things behind the scenes, and it seems like a lot sometimes. It’s a trade-off we have to live with, and this is maybe true of other parts of .NET; nothing comes for free, right? Sorry, that’s a long way of saying I don’t have an answer for the Entity Framework side.

Matt: I’d recommend posting something on the forums. What I do know, part of them doing everything in the open, is that they seem very open to people posting things and creating issues on GitHub, and I’ve certainly seen people dive in on GitHub and diagnose performance issues, particularly ones around memory. I’ve seen a few cases where people from Microsoft have helped people with a diagnosis. So I’d maybe put together a small repro or something and do that. But yeah, I don’t have any more to offer specifically around Entity Framework. Sorry.

Andre: No worries. Then there’s also a great chat from Assad around ASP.NET Framework: “When using async, we were told to use ConfigureAwait(false) whenever we don’t need access to the context, to avoid waiting for the original thread the request came in on. I’ve heard that this is no longer required in ASP.NET Core. For framework libraries, should we still use ConfigureAwait(false)? Is that true?” It looks like a good discussion there.

Matt: That’s a good answer, I think, [inaudible 00:34:29]. Anyway, someone else has answered that, and what they’re saying makes sense. Things did change, and it’s one of the things that’s been slightly tricky with .NET Core over the ages: stuff has potentially changed over the versions. But yeah, I’ve seen a similar thing to that answer, where you continue doing as you’re doing. There has been a bit of a fundamental change around the synchronization context, though; it does change things somewhat. So I’ll defer to the answer that’s been given-
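
For context: ASP.NET Core dropped the SynchronizationContext that classic ASP.NET had, which is why ConfigureAwait(false) no longer matters in ASP.NET Core application code. It’s still a sensible habit in general-purpose library code, as this hedged sketch (with a hypothetical WidgetClient) shows:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical library class, for illustration only.
public static class WidgetClient
{
    private static readonly HttpClient Http = new HttpClient();

    // In a library, ConfigureAwait(false) stops continuations being
    // marshalled back to the caller's SynchronizationContext, so a
    // WinForms or classic ASP.NET caller that blocks with .Result
    // can't deadlock on this method.
    public static async Task<string> FetchAsync(string url)
    {
        var response = await Http.GetAsync(url).ConfigureAwait(false);
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}
```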

Andre: Excellent. All right Matt continue.

Matt: … That was what I was going to cover on .NET Core versus Framework. I guess the main takeaways from my side are that, fundamentally, there are a lot of performance changes going on in .NET Core, so we can get some nice benefits, and I’ve not seen anything that says all of those are going back into .NET Framework. There is some stuff being pulled back to .NET Framework, but not the more fundamental things like Span being integrated into the runtime, Tiered Compilation, and other things. And obviously, as we go over time, we’ll see less and less stuff going back; that quote I started with from Scott Hunter made it clear that the focus of .NET Framework is different. So there’s a lot of benefits we can get by moving to .NET Core.

Matt: Certainly the cross-platform side, and a lot of performance benefits. I’ve found it a joy to work with the new project files, the recent csproj approach you can use nowadays; there are some benefits there. But performance-wise, certainly, a lot of things are going in, and I think over time we’re going to see that diverge more and more, so we’ll see the performance results. And when you see those posts from Microsoft talking about things that have gone into .NET Core 3, it’s across the board with the same performance benefits. There are some compelling reasons to move to it.

Matt: Let me talk a bit about some of the things I’ve been working on recently with some colleagues on the Raygun profiler, and introduce some of the things it can do, particularly around multi-threading. As I talked about before, we’re in an asynchronous world now. It’s very hard to get very far writing your code, particularly if you’re doing web-server-type code, without using asynchronous code.

Matt: It’s very much tied into the pipeline of the frameworks we build on top of. So you want a solution that lets you know what’s going on in that code. If people haven’t seen what the Raygun dashboard looks like, this is what you would get with some simple asynchronous code. And what we’re doing here is, in effect, stitching together stuff from the two threads.

Matt: Above the dotted line, you see the thread where your asynchronous method code started. We’ll show you some code in a minute of what this can look like, but this could be a web request coming in, and then you do an asynchronous call out to a third party, an HTTP request, or it might be a database call, whatever it is, something else going on. And then it resumes. This is actually from our internal test app, which is why it has slightly strange names, but it shows it nicely.

Matt: So we’re able to stitch things through different threads, and we’re also able to pick out what really matters: the total elapsed time. What we’re seeing for that very short function in the top left-hand corner is the code that ran before the await keyword, and the rest of it, below the line, is the resumption that took a long time, where the real work is being done.
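
The split being described is easy to reproduce. In a console app (which has no SynchronizationContext), a sketch like this will usually print two different thread IDs, which is exactly the before/after-await boundary the trace view stitches together:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class AwaitSplitDemo
{
    private static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        // Everything up to the first await runs on the calling thread:
        // the short segment above the dotted line in the trace.
        Console.WriteLine($"before await: thread {Thread.CurrentThread.ManagedThreadId}");

        string body = await Http.GetStringAsync("https://example.com");

        // The continuation typically resumes on a different pool thread:
        // the longer segment below the line where the real work happens.
        Console.WriteLine($"after await: thread {Thread.CurrentThread.ManagedThreadId}, {body.Length} bytes");
    }
}
```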

Matt: Because we’re able to stitch it all together, we’re able to report the real time: even though it happened to cross two threads, the real time of this response was 1,500 milliseconds, one and a half seconds. In this case we’re faking it synthetically with a thread sleep, as you might spot at the bottom, but the same applies if it was a real web request. To look at a bit more of a complex example, and I’ll show you the code for this one in a second, this is our kitchen-sink case. We can deal with things resuming, and what you find when you start to look into this is that there’s very much thread pooling and things of that kind going on.

Matt: You may find examples where it resumes on the same thread; it might resume on a completely different thread; it might jump to one thread and then resume on a third thread. That’s all generally hidden away from you when you’re writing this code, but again, we’re able to stitch it all back together and present it to you. In a second, I’m going to show these live in a dashboard, but I just want to talk through them first. We’re able to give you a high-level overview at the top, where you can see the different threads, and then you can delve into them.

Matt: And the final part: with all of this, we do also allow you to integrate with GitHub, if you give permission to our app; you have control over whether you want to do this. This is significant because this is the code of the kitchen-sink method I was talking about before. It’s quite handy when you’re looking at these traces: you don’t need to worry too much about jumping over and finding the place in the code, as we can display it in the dashboard for you. And these are just different examples. So we can deal with thread pool queuing; what we saw in the folded-up view I showed you a second ago was the two continuations, but we can also do Task.Run, the previous examples, and the async/await example.

Matt: If people aren’t already seeing this, it’s going to become more and more common. There are common patterns, maybe not quite as low-level as fire-and-forget, I don’t think that’s necessarily the most common pattern, but certainly having your asynchronous requests coming in and then launching one or more tasks; you might have three tasks in parallel and wait for one of them to finish, or you might just have the async keyword dotted through your code, things being fired off and then resuming. This side of things is a test app, so ignore the fact that I used Thread.Sleep at the bottom, sorry, Task.Delay.

Matt: That’s not good practice, but anyway, we need to test the different scenarios people might run. The point is, we’ve covered as many of these scenarios as we could think of. Let me just switch over to show you this in a bit more of a live scenario. This is the same thing, just shown live, and we give you the ability to collapse the threads. As I pointed out, we synthetically put in markers to show where threads continue, to indicate that this is the point where something started off. Here we see it starting in the first thread, then it continues in the second thread, and then we have another continuation from that thread going down.

Matt: And then the final bit of code is executed; it’s your code, so we’re able to show it. Again, any time this code is in a GitHub repository that you’ve linked up with Raygun, we’re able to pull that down and show it there. Again, Thread.Sleep isn’t a general pattern, but it’s useful for our testing; this is from a test app to show you a bit of the scenario as we work our way through. I know that’s a more complex one; this is the one I showed in the demo before, actually, so it’s similar. We have multiple threads kicking off different tasks; sometimes they’re kicked off in parallel, sometimes as a task with a continuation.

Matt: We’re able to identify the different scenarios. Very roughly, what we’re doing is looking, at any time, for the different places that new work can be started; it could be a task, it could be a thread, different things, and we’re able to identify that. Then, when the work’s actually done in another thread, we’re able to stitch it back together and give you a nice view so that you can make sense of it. And this helps because, oftentimes, what your customer is seeing is this whole four seconds at the beginning.

Matt: In this case it’s synthetic, but we’re able to show you where the time has been spent; here it’s spent equally between the different tasks. In reality, it might be the time spent waiting for a database call, or the actual database execution time, or the time spent rendering a view, whatever. There are various different things going on. And we also break it down into system code and your code, to give you an idea of what’s going on there. I’ll pause there. Are there any questions? I’m trying to see if any more have come up, just looking in the chat. Yeah, is there any other-

Andre: There’s a question asking if you can give an overview of crash reporting, in terms of how APM works with crash reporting. Is that possible in the demo?

Matt: I don’t know what got set up here, sorry; [inaudible 00:44:40] to show. Yeah, to explain: people might be familiar with Raygun Crash Reporting, the Raygun product that’s been around from before the APM tool, and we integrate the two. So if you end up with an error in this, and unfortunately I won’t be able to show you, but if you end up with an error in here, we’ll be able to link to the crash reporting side of things, and the tools talk to each other.

Matt: You’ll see the errors reported, you can view them, and you’ll be able to jump to the more detailed error reporting, crash reporting, down the left-hand side, and vice versa. We’ve just made sure the tools talk nicely together. And I guess the assumption is that errors can cause performance issues, particularly lots of them. So you can jump from seeing errors to the traces where they were generated, and you can start with a trace and drill back to the error to see how many times it’s going on. That covers it; unless you have anything else on that, Andre, I don’t have anything prepared for the error reporting.

Andre: That’s all right. Can you set up what is defined as an error, so that it then shows up as an error? Is that something that’s possible?

Matt: Good question. Sorry, I guess the… so we hook into .NET exceptions; that’s the very simplistic idea of what we do with error reporting. For any .NET exceptions, there are various different ways in the different .NET technologies to know an exception has been thrown and caught, and we hook into that. Then, at a high level, we start to aggregate the same exceptions together and report them in the view here, to say it’s this function. But yeah, it’s all based around .NET exceptions.

Andre: Cool. Cool. Excellent. Well I think that’s all the questions we have in the Q&A and on the chat and we’re running five minutes over time. I think we should probably wrap it up. If anyone has any questions, we’ll follow up with email and you can ask us directly over email and we’ll do our best to answer any questions you have.

Andre: As Matt mentioned, we launched APM a little while ago, and we just launched it for .NET Core a few weeks ago. It’s pretty new out in the market, and if anyone wants to try it out and take it for a spin, please let us know. And I think that’s it. What we’ll do is send out the full recording along with the slides as part of a follow-up. Yeah, I just want to thank everyone for joining us.

Thank you, Matt, for sharing your wisdom with us today.