Understanding cross-platform .NET, and why .NET 5 is important

In the beginning there was .NET Framework, and it was all very simple. But then, over time, it grew and grew, and became a lot more complicated. So, let’s have a look at how it all fits together…ish, and how current and future versions try to make life simpler for us developers.

.NET Framework, the OG

In 2002, Microsoft launched .NET Framework 1.0. It was going to be the new way forward for all development things in the Microsoft world. It introduced a new way of writing applications, using a managed runtime and a Just-In-Time compiled intermediate language (IL) that was produced by compiling source code written in a new language, C#, or in a re-imagined version of VB called VB.NET.

.NET Framework was initially installed as a separate thing, but was soon merged into Windows and became an integral part of it. This meant that Windows could be delivered with .NET Framework included, letting Windows itself take a dependency on it, and saving people from having to download a huge installer. But the framework could still be updated separately, meaning that its release cadence was not completely tied to the Windows release cadence. At least not initially… However, .NET Framework did not really support side-by-side installation, so newer versions were installed as in-place updates, and only one version (per CLR generation) could be installed at any time. In most cases this wasn’t a problem, as newer versions aimed to be backwards compatible.

The release cadence was however tied to the release cadence of Visual Studio. Whenever a new version of .NET Framework was released, a new version of Visual Studio had to be released as well for you to be able to use it. This made the releases of new .NET Framework and Visual Studio versions a bit more exciting than they are today. Partially because .NET Framework also came with most of its functionality built in. If you wanted to build Windows apps, you had WinForms built into the framework. And when .NET Framework 3.0 was released, WPF was baked into the framework, just as ASP.NET and WCF were built into the framework. So new releases of the framework included completely new application models and lots of new features.

This is also part of the reason why NuGet was initially used mostly for utility libraries, and not whole frameworks and application models as it is today. As the app models were mostly baked into the framework, and distributed with Windows or as a massive installer, there was no need to distribute large things as NuGet packages.

What is in .NET (Framework)

However, before we can dive into how .NET expanded into the world we are working in today, we need to understand some of the innards of .NET. At a high level, a .NET platform consists of a few different pieces. The first one is the managed runtime. This is a bunch of native code that ties into the underlying OS. In the case of .NET Framework, it is heavily tied to Win32.

The runtime adds a nice, standardized layer of APIs on top of the raw underlying APIs, providing us with a much nicer set of APIs to use. In .NET Framework’s case, these API calls are often mapped into calls to COM and Win32 by the runtime. Besides these APIs, it includes features like memory management, garbage collection, and assembly loading.
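
To make that a bit more concrete, here is a rough C# sketch of the kind of plumbing involved. It uses P/Invoke to call a real Win32 function (GetTickCount64 in kernel32.dll) directly. The BCL does something similar internally for a lot of its APIs, even if the exact native calls it makes are implementation details. (And yes, this particular snippet only runs on Windows, which is kind of the point.)

    using System;
    using System.Runtime.InteropServices;

    class PInvokeDemo
    {
        // Declare a managed entry point for a native Win32 function.
        // Much of .NET Framework's BCL ends up in kernel32/user32/etc. like this.
        [DllImport("kernel32.dll")]
        private static extern ulong GetTickCount64();

        static void Main()
        {
            // Calling the managed declaration transitions into native Win32 code.
            Console.WriteLine($"Milliseconds since boot: {GetTickCount64()}");
        }
    }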

The framework specification also includes what is called a Common Type System (CTS). This defines the basic types that are available in the runtime, and how they are mapped in memory and so on.

On top of the runtime, there is a Base Class Library (BCL) with types that you can use to build your applications. It contains a built-in set of APIs that all application models have access to: fundamental types like System.String and System.DateTime, as well as higher-level APIs and functionality like file access, collections, streams, attributes, etc. Basically, the building blocks that everything else is built on top of, and that are available to anyone using the framework. These things are often built using native code and integrate with the runtime and the host OS.
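
As a small, concrete illustration (the class and file names are made up for this sketch, but all the types are standard BCL), this is what the CTS and BCL look like in everyday code. The C# keywords are aliases for CTS types, and the collections, dates and file APIs come straight from the BCL:

    using System;
    using System.Collections.Generic;
    using System.IO;

    class BclDemo
    {
        static void Main()
        {
            // CTS: the C# keywords are just aliases for the runtime's types.
            int answer = 42;                       // System.Int32
            string name = "zerokoll";              // System.String
            Console.WriteLine(answer.GetType());   // prints System.Int32

            // BCL: collections, dates and file access, available to every app model.
            var stamps = new List<DateTime> { DateTime.UtcNow };
            File.WriteAllText("stamps.txt", $"{name}: {string.Join(", ", stamps)}");
        }
    }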

And on top of that, there are the application models, which I hinted at just a second ago. The application models are the foundations for the different types of applications that we can build, for example WinForms, WPF and ASP.NET. These all have their own set of APIs and their own pieces that make them tick.

All of this is part of the .NET Framework, and is installed on your machine when the framework is installed. This means that any .NET application can just assume that these APIs and assemblies are available to be called.

Besides these pieces that are installed with the framework, you have custom assemblies that you can take a dependency on to help you build your applications. They can be downloaded from the interwebs and installed on the system or added to the application through NuGet. But the important part is that they are code packages that you can use but are not part of the framework.

How does it work…ish

When you start up a .NET application, there is some bootstrapping code that starts up the managed runtime for you. Once the runtime is up and running, it loads in the assembly that contains the IL code for the application that you want to run. As that assembly is loaded, the runtime figures out all the assemblies that it depends on to run and loads those as well. And then it loads their dependent assemblies. And so on, until all the required assemblies are loaded. Then it looks at the entry point for the application and JIT compiles that to machine code that can be executed by the computer that the application is run on.

The JIT compiler only compiles the code that it thinks is needed right now. This could be just the code you are executing, or potentially some other stuff it believes you will use in the future and that might as well be compiled ahead of time. This makes start-up very fast compared to compiling all of the IL into native code before the application starts. And then, when you perform an action that requires another piece of your code to be compiled, it compiles that piece of the code Just-In-Time for you to use it. The compiled code is then kept in memory, so a piece of IL is only compiled once, no matter how many times it is run.
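
A quick (and admittedly unscientific) way to see this in action is to time the first and second call to the same method. The numbers will vary from machine to machine, and things like tiered compilation muddy the water a bit, but the first call has to pay for JIT compiling the method, while the second one reuses the already compiled machine code:

    using System;
    using System.Diagnostics;
    using System.Runtime.CompilerServices;

    class JitDemo
    {
        // Keep the method from being inlined so the calls below actually hit it.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static long Sum(int n)
        {
            long total = 0;
            for (int i = 0; i < n; i++) total += i;
            return total;
        }

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            Sum(1);                         // first call: the IL is JIT compiled, then executed
            Console.WriteLine($"First call:  {sw.Elapsed}");

            sw.Restart();
            Sum(1);                         // second call: reuses the compiled machine code
            Console.WriteLine($"Second call: {sw.Elapsed}");
        }
    }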

Note: .NET actually supports “pre-compiling” the code into native code so that you don’t need to do the JITing, making start-up even faster. For example, the framework libraries are pre-compiled at install time to make sure that they are as fast as possible when you want to use them. However, the pre-compiling has to be done on the target machine, often during installation.

Another Note: .NET Core also supports tiered compilation, where it can do a quick compilation that generates less efficient code initially, and then re-compile it into more efficient code later if that seems worthwhile. This keeps the cost of the first compilation down, while still allowing very efficient code to be generated later where it makes the application perform better.

So why is this important to understand? Well… this is all information that becomes somewhat important to understand as the ecosystem evolves.

And then there was Mono

As the .NET runtime specification, or Common Language Infrastructure (CLI), is standardized as ECMA-335, it is possible for people to implement their own .NET runtime. It was even a selling point from Microsoft when it was released, as it would allow people to take .NET cross-platform and have it run in a lot of places. However, the runtime is a complicated thing, and I think it is a pretty daunting task to take on. This is probably one of the reasons that we didn’t see a bunch of different runtimes popping up all over the place.

However, there were some people who did go and create another runtime, called Mono. The Mono project is an open source .NET implementation that was built to allow .NET applications to run on Linux. It didn’t have 100% parity with .NET Framework, but it did allow you to run some applications on Linux and macOS.

And this is why the talk about runtimes and IL code and JIT compilation is interesting! The Mono project had to implement a native runtime that supported things like garbage collection and assembly loading, as well as provide an implementation of the CTS and at least some of the BCL. However, they couldn’t really implement the full BCL. Partially because it was a massive task, but also because there is quite a bit of functionality in .NET Framework that doesn’t really make sense on Linux.

Since the Mono runtime supports assembly loading, and can JIT and run IL code, it can load any assembly and understand the IL code it contains. However, there isn’t anything in the runtime that verifies that the code you are loading will actually work… And this is where it gets a little complicated… As long as the code being run on Mono isn’t calling APIs that aren’t supported by the platform, everything is fine. But if you do call something that isn’t supported, you are in trouble. And the kind of trouble depends on how the API is missing.

First of all, it could be that the API isn’t available at all. This will cause the JIT-compiler to throw an exception (typically something like a MissingMethodException or TypeLoadException), as it can’t even compile the code.

The second option is that the API might be there, but throw a NotImplementedException or do a P/Invoke to an OS API that doesn’t exist. This allows the JIT-compiler to compile the code without a problem, but if it is called, it will throw an exception at runtime. As long as that specific code isn’t called, maybe because it is behind an if-statement for example, everything is fine. And if it is called, the exception can at least be caught using a try/catch, unlike the first option, which causes the compilation itself to fail.
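
In code, handling that second kind of problem tends to look something like the sketch below. SomeWindowsOnlyOperation is a made-up stand-in for a platform specific API, but the guard and the try/catch are the real patterns you end up using (the exact exception type varies; PlatformNotSupportedException and NotImplementedException are the usual suspects):

    using System;
    using System.Runtime.InteropServices;

    class PlatformGuardDemo
    {
        static void Main()
        {
            // Option one: guard the call so the unsupported code path never executes.
            if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
            {
                Console.WriteLine("Safe to call Windows specific APIs here.");
            }

            // Option two: let it fail and catch the exception when it happens.
            try
            {
                SomeWindowsOnlyOperation();
            }
            catch (PlatformNotSupportedException ex)
            {
                Console.WriteLine($"Not supported here: {ex.Message}");
            }
        }

        // Hypothetical stand-in for an API that exists but isn't implemented everywhere.
        static void SomeWindowsOnlyOperation() =>
            throw new PlatformNotSupportedException("Not on this platform, sorry.");
    }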

This is not specific to Mono in any way though. This is just the way that .NET is designed and could even affect you between different versions of .NET Framework. However, the tooling tries very hard to help you in these cases to make sure you don’t run into problems at runtime.

And then there was Silverlight, Windows Phone, Xamarin, Unity and so on

However, after a while, new runtimes did start to pop up. We got Silverlight that gave us a .NET runtime in the browser. Then Windows Phone that gave us a runtime that ran on a phone. And then Xamarin took it one step further and used Mono and magic to run .NET-based code on iOS and Android. And Unity gave us the ability to use our .NET code to build advanced 2D and 3D games. Slowly but surely the number of platforms where you could use .NET-based code grew.

The problem is that every one of these platforms offers slightly different capabilities and APIs. They are all .NET runtimes, allowing us to load and execute code in .NET assemblies. But it was a minefield to work in. If you wrote some code and compiled it to a .NET assembly, it could theoretically be loaded in all of the runtimes. But it could explode at any point if it happened to call an API that wasn’t supported by the platform it was running on.

This had to be handled in some way to make it usable.

Portable Class Libraries to the rescue

The first attempt at solving the problem of knowing what APIs you could (or could not) use for your library was called Portable Class Libraries (PCLs). It gave you a way to create a project in Visual Studio and define what platforms you wanted it to be able to run on. Visual Studio (or the build system) would then figure out what APIs were available across all the selected platforms, and make sure that you weren’t calling APIs that weren’t available. This basically meant that the API surface got smaller and smaller for each platform you wanted to target. Especially if you decided to target a very small platform like Windows Phone.

It’s important to understand that this problem only really applies to class libraries that should be used across multiple platforms, and as such, mostly to people who produce libraries that are distributed using NuGet. The applications that consume the libraries are always platform specific, as they depend on the specific runtime to start the application. A Windows Phone app can’t run on any platform other than Windows Phone, as it needs phone specific APIs to start up and render its UI. And a Unity app can’t run anywhere other than Unity, as it requires the Unity runtime to start. However, class libraries can be loaded on any platform, as they have no direct dependency on a specific runtime, only on the CTS and BCL.

The different frameworks to target are specified using something called a Target Framework Moniker, or TFM. A TFM defines a specific platform and version, which in turn defines a specific set of APIs that can be used. Examples of TFMs are net46, net472, netcoreapp3.1, netcore50, win8, win10, wp7 and wp81.

In PCL projects, you defined a list of TFMs that you wanted to support, and it would then compile an assembly based on a synthetic, “made-up” framework, that only included the API set that was available in all the selected frameworks.

The idea behind PCLs isn’t bad at all. However, it had some real problems. First of all, with platforms supplying very varied sets of APIs, combining a few of them made the API surface shrink to almost nothing. On top of that, it became quite an interesting task to figure out all the possible combinations of frameworks, and what APIs were available in each combination. A task that was needed to be able to create the “synthetic” target frameworks that were used. And whenever a new platform came out, a new set of potential combinations of TFMs and APIs had to be worked out. On top of that, you were not able to target that new platform until Visual Studio had been updated to support those new combinations as well. Not to mention that library developers had to re-build and re-distribute their packages to support the added frameworks.

Note: This was also made even harder by the fact that framework creators cared little about portability and focused on what they needed in their specific scenario, which ended up producing very varied API sets.

In the end, PCL was a good idea, and offered library developers the ability to build libraries that could be used across multiple different platforms in a safe way. But it was too hard to figure out, and every new platform would add to the complexity, slowly making it impossible to maintain and understand. Because of this, PCL projects have now been deprecated and replaced with .NET Standard, or potentially something called multi-targeting, depending on what you want to accomplish.
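
Multi-targeting basically means that the project file lists several TFMs (for example <TargetFrameworks>netstandard2.0;net5.0</TargetFrameworks>), and the library is compiled once per target framework. Here is a small, made-up sketch of what that can look like in code, using the conditional compilation symbols (NET5_0, NETSTANDARD2_0 and so on) that the SDK defines for each target:

    using System;

    namespace MyLibrary // made-up library name for this sketch
    {
        public static class FrameworkInfo
        {
            public static string Describe()
            {
                #if NET5_0
                // Only compiled into the net5.0 build of the library.
                return $"Built for .NET 5, running on runtime version {Environment.Version}";
                #else
                // Compiled into the netstandard2.0 build, which .NET Framework apps can use too.
                return "Built for .NET Standard 2.0";
                #endif
            }
        }
    }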

And then we got .NET Core

With the release of .NET Core, Microsoft decided to break free of the many, many years of legacy that they had in .NET Framework. Instead, they started from scratch with a whole new .NET framework. A framework that has runtimes that allow us to run on Windows, Linux, and macOS. A framework that doesn’t have to adhere to the backwards compatibility requirements that .NET Framework had to. It’s much smaller, and is aimed at enabling the release of new features and capabilities that weren’t possible in .NET Framework due to in-place updates causing compatibility constraints.

.NET Core moved parts of the BCL, and the app models, out of the framework and into a bunch of NuGet packages. This keeps the framework really small, modular and flexible, as big parts of it can be updated using smaller packages instead of massive framework installs.

It also introduces a new way of running your application: self-contained. This means that the whole thing, including the runtime, the used parts of the BCL and the application itself, can be deployed together as a unit. The machine that runs the application doesn’t even need the .NET Core runtime to be installed to be able to run it.

However, most applications are still deployed as framework-dependent. This means that they expect the runtime and a pre-defined set of assemblies to be available on the machine already. This makes the deployments much smaller, as the application only needs to ship the assemblies that aren’t part of the framework. On the other hand, it requires the target machine to have the runtime installed.

However, and that’s a very big “however”, the pre-installed assemblies are not stored in the GAC. Instead, they are stored in something called a shared framework, which I will get back to in a minute.

The problem with this approach of having everything as separate packages is that you end up with a LOT of packages and a LOT of different versions to keep track of. Especially when you start looking at using something like ASP.NET Core, which requires a ton of assemblies to work, both BCL and app model specific assemblies, from both Microsoft and 3rd party developers.

To simplify this, metapackages were introduced. These are NuGet packages that bundle up a defined set of libraries to include in the application. For example, the metapackage called Microsoft.AspNetCore.All included a reference to everything in the .NET Core BCL, as well as all the packages for ASP.NET Core and all the packages they needed, such as JSON.NET. However, the metapackage doesn’t actually contain the code. Instead, it just contains references to assemblies that are part of the .NET Core runtime install, and thus already available on the target machine.

These metapackages made it a lot easier to build your applications. Just reference that metapackage, and you got a reference to everything you needed. And since the build system knew which of the referenced assemblies were part of the runtime installation, it could just skip bundling those packages.

However, this solution for bundling libraries for ASP.NET Core had some major drawbacks. Some pretty technical drawbacks that caused a bunch of versioning problems. But I won’t go into this, as this has been resolved as of ASP.NET Core version 2.1.

As of ASP.NET Core 2.1, it became a “shared framework”. A shared framework is a set of assemblies that are deployed with the runtime and stored in a folder on the computer. This means that the computer already has a defined set of assemblies supporting a specific version of ASP.NET Core available, and that these don’t need to be shipped with the application. Just as with the metapackages. But using shared frameworks also allows us to bring our own, newer versions of the assemblies. So, when newer versions are made available, your application can pull them in as NuGet packages and include them in the deployment. This gives us a minimum, compatible version of all the packages pre-installed on the target machine in a shared framework, while at the same time letting us upgrade the packages we want to upgrade by adding a newer version as a NuGet package.

There are a couple of different shared frameworks available.

  • Microsoft.NETCore.App – contains the BCL/.NET Standard stuff, and is included in all .NET Core applications and libraries.
  • Microsoft.AspNetCore.App – contains most of ASP.NET Core, as well as for example Entity Framework Core, and references Microsoft.NETCore.App.
  • Microsoft.AspNetCore.All – references Microsoft.AspNetCore.App and adds some 3rd party things like some Azure stuff, and some Redis and SQLite support, etc.
  • Microsoft.WindowsDesktop.App – contains WPF and WinForms support, and references Microsoft.NETCore.App.

Note: You don’t see the shared frameworks being referenced in your csproj-file. Instead, the project file defines an SDK, which in turn references the shared framework package that it needs. For example, the Microsoft.NET.Sdk SDK references the Microsoft.NETCore.App package, and the Microsoft.NET.Sdk.Web SDK references the Microsoft.AspNetCore.App package.

Tip: If you want to know more about shared frameworks and the innards of this stuff, have a look at https://natemcmaster.com/blog/2018/08/29/netcore-primitives-2/.

Other parts of the .NET Core framework have simply been moved to a completely external NuGet-based model instead. This allows these parts to evolve on their own and use their own release cadence, and because of this innovate more rapidly.

However, .NET Core is really “just” another .NET implementation, which means yet another runtime and BCL to consider. Even if that BCL is spread across NuGet packages and shared frameworks and what not. That in turn means another set of TFMs (netcoreappX.Y) to take into consideration when building cross platform libraries. Something that would have made the PCL situation even worse if it hadn’t been deprecated.

However, with .NET Core breaking away from .NET Framework, and being the future of .NET development, it was also time to try and break free from the PCL problem. And the solution, at least for the time being, was .NET Standard.

.NET Standard

.NET Standard is based around the idea of turning “everything” upside down. Instead of having the library developer figure out what APIs are available on the target platforms, the platform agrees to implement a defined set of APIs. This allows library developers to target a defined API set, a .NET Standard version, and then the library will work on any framework that supports that version of .NET Standard. This also allows new frameworks to be added and have existing libraries automatically work on them, since the library only depends on the platform implementing a defined set of APIs, not on a specific framework or TFM.

So, .NET Standard is really just a very well-groomed list of APIs that is versioned and uses its own TFM (netstandardX.Y). And each .NET Standard version extends the previous version’s API set, making it easy to understand what you get.

All frameworks can then declare what .NET Standard version they support, and automatically make any library that targets that version, or a lower one, work. Making life a LOT easier. Just pick the lowest .NET Standard version you can work with, and the ecosystem will automatically figure out where your library can be used. Or figure out what .NET Standard version is supported on the platforms you want to target, and make sure that you target the lowest common denominator.

However, even if .NET Standard makes life a lot easier than PCL, it is still not perfect. It still highlights the fact that the .NET world is fragmented…

.NET 5

This is where .NET 5 comes into play. It aims to remove a lot of these complexities by aligning all the different frameworks to support a common set of APIs. This makes it a lot easier to build cross-platform libraries, as all frameworks of a specific version support the same API set.

However, it does introduce yet another set of TFMs, all starting with net5.0. Targeting net5.0 will allow your library to run on any platform that supports .NET 5.0 (or above). And to be honest, .NET 5 is almost like a continuation of .NET Standard, as the net5.0 TFM defines the API-set that is to be supported by all platforms that support .NET 5.

However, there are also platform specific TFMs that allow you to use platform specific APIs. For example, Android APIs are available in net5.0-android and iOS specific APIs are in net5.0-ios. This makes it very clear what you are targeting. If you omit the platform specific extension to the TFM, your library can be used by any application running .NET 5.0 or above. But if you add it, you get those extra APIs for that specific platform, but at the same time you limit it to be used on that platform.
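
As a small illustration (a sketch, not a complete app), .NET 5 also adds OS checks to the BCL, so code in a plain net5.0 project can branch on the platform at runtime, while the platform specific APIs themselves stay behind the net5.0-android/net5.0-ios style TFMs:

    using System;

    class PlatformTfmDemo
    {
        static void Main()
        {
            // OperatingSystem.IsAndroid/IsIOS/IsWindows were added to the BCL in .NET 5.
            if (OperatingSystem.IsAndroid())
            {
                Console.WriteLine("Android code path (the Android APIs live in net5.0-android).");
            }
            else if (OperatingSystem.IsIOS())
            {
                Console.WriteLine("iOS code path (the iOS APIs live in net5.0-ios).");
            }
            else
            {
                Console.WriteLine("Plain net5.0 code that runs anywhere .NET 5 runs.");
            }
        }
    }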

Note: The Android and iOS stuff won’t actually be in the .NET 5.0 release. It will be added later on due to time constraints. But you get the point… the platform specific stuff will live in its own TFMs.

So where does that leave .NET Standard? Well, .NET Standard will still be there in the future. But only for backwards compatibility. And if you are only targeting solutions that are using .NET 5.0 or later, you might as well just target net5.0 and ignore .NET Standard. But…if you do need to support older applications running on for example .NET Framework, the solution is to target .NET Standard. This allows it to be used by older frameworks, as well as newer ones like .NET 5, giving you a broader audience.

For example, if you want to enable your library to be used on .NET Framework 4.8, you have to target netstandard2.0, as this is the last .NET Standard version that .NET Framework supports. And, for now, if you can build what you want with the APIs in netstandard2.0, the reach of your library will be wider, as it enables usage on more platforms. For the foreseeable future, it is very likely that library developers will keep targeting .NET Standard 2.0 to be able to reach developers using .NET Framework. But over time, the number of applications targeting older frameworks will get smaller and smaller, and hopefully we will be able to switch over to the easier, unified netX.Y[-platform] TFMs completely.
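
And since the exact same netstandard2.0 assembly can end up being loaded by very different runtimes, it can be handy to know that you can ask the BCL which one you actually landed on. A tiny sketch:

    using System;
    using System.Runtime.InteropServices;

    class WhereAmIRunning
    {
        static void Main()
        {
            // Prints something like ".NET Framework 4.8..." or ".NET 5.0...",
            // depending on which runtime loaded the assembly.
            Console.WriteLine(RuntimeInformation.FrameworkDescription);
        }
    }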

Conclusion

I hope that this post makes it a little clearer how things fit together. I know there is a LOT to take in, and the post covers a lot of things. And on top of that, it isn’t really a simple topic, and it has a lot of nooks and crannies. But I still felt that it was worth covering, as I have gotten lots of questions over the last few years about how all of this fits together. Especially about how .NET Core and .NET Standard work. Something that gets even harder to grasp for a lot of developers with the release of .NET 5.

Luckily, we can ignore most of these things for a lot of our development as it “just works”. But it is still good to know how it all fits together. Understanding things like what a TFM is, and what .NET Standard is and why it is there, can make it a little easier to figure out some things that might happen when upgrading applications for example. Or even when trying to figure out what TFM to target for your next library.

Hopefully though, all the work that is being put into .NET 5, and beyond, will make at least some of this go away. Having all the frameworks under the .NET 5 umbrella means that they all have a unified API set that we know is always available. And when we need platform specific functionality, it is easy to target the platform specific TFM, which then clearly signals that the library will only run on that platform.

Yes, we will still need to take a few different frameworks into account as we wait for the world to update to the latest and greatest. And it might take a lot of time before .NET Framework can be ignored, at least by those working in the “enterprise” world where the cogs move a little slower.

So, for now, you should try and target netstandard2.0 for your libraries if possible. This will let them run on a very big part of the .NET ecosystem, including .NET Framework, .NET Core and .NET 5. And once the world has been updated, we should be able to move over, target net5.0, and have our code run everywhere!

However, it is also important to understand that the work being done for the .NET 5.0 release is mostly under-the-hood stuff to align the different frameworks, and it mostly impacts library authors. This, to me at least, makes the .NET 5 release a little bit boring. Especially if you were hoping for a bunch of cool new stuff. Having said that, we do get Blazor WebAssembly in .NET 5, and that looks pretty cool… But other than that, it seems to be mostly a lot of platform alignment for the .NET 5, and probably 5.1, releases. But keep in mind that this is all work that takes us towards a future where everything .NET just works, no matter what platform you want to work with.

And as usual, if you have any questions, feel free to ping me on Twitter at @ZeroKoll!
