But many are (finally) realizing that it’s not for everyone. Most of us aren’t at the scale of Netflix, LinkedIn, etc., and therefore don’t have the organizational manpower to offset the overhead of a full-blown microservices architecture.
An alternative to a full-blown microservices architecture that’s been getting a lot of press lately is called “modular monoliths.”
Shouldn’t well-written monoliths be modular anyway?
Sure. But modular monoliths are done in a very intentional way that usually follows a domain-driven approach.
This is part of the 2019 C# Advent! Take a look at all the other awesome articles for this year!
Here are the main sections in the article in case you would like to skip certain parts:
The GitHub repo for everything in this article is here.
The biggest benefits that microservices give us (loose coupling, code ownership, etc.) can be had in a well-designed modular monolith. By leveraging the domain-driven concept of bounded contexts, we can treat each context as an isolated application.
But, instead of hosting each context as an independent process (like with microservices), we can extract each bounded context as a module within a larger system or web of modules.
For example, each module might be a .NET project/assembly. These assemblies would be combined at run-time and hosted within one main process.
Each module would own everything from UI components, back-end business/domain logic to persistence.
Compared to microservices, the benefits of modular monoliths include:
I see modular monoliths as a step within the potential evolution of a system’s architecture:
For domain-driven approaches, starting with a modular monolith might make the most sense and is definitely worth considering.
Here’s a more in-depth primer on modular monoliths by Kamil Grzybek if you’re interested.
There are many ways to build modular monoliths!
For example, Kamil Grzybek has created a production-ready modular monolith sample.
I personally prefer less separation within each bounded context / module, but his example is super detailed and worth looking at!
And, of course, Kamil says it well in his repo’s disclaimer:
The architecture and implementation presented in this repository is one of the many ways to solve some problems
In this article, we’ll be looking at a much simpler approach to building modular monoliths. Again, the direction and implementation depends on your needs (business requirements, team experience, time-to-market required, etc.)
We’ll be looking at one way that’s unique to .NET Core, using a newish feature called Razor class libraries. Razor class libraries allow you to build entire UI pages, controllers, and components inside a shareable library! It’s the approach I’ve used for Coravel Pro, and it enables some exciting possibilities.
I recently wrote an article about using some domain-driven approaches to think about and re-design an insurance selling platform I once worked on.
To keep things simple for now, let’s imagine we’ve determined two bounded contexts from this domain:
The specific details are not so important since the remainder of this article will look at implementation and code.
The structure of our solution will roughly look like the following - with a .NET Core web application as the host:
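A rough text sketch of that layout (the project names match the `dotnet` commands used below):

```
ModularMonolith/
├── HostApp/                    (.NET Core web app - the host process)
└── Modules/
    ├── InsuranceApplication/   (Razor class library)
    └── MedicalQuestions/       (Razor class library)
```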
Let’s implement the skeleton for our modular monolith and use razor class libraries as a way to implement our bounded contexts.
First, create a new root host process:
```shell
dotnet new webapp -o HostApp
```
Next, we’ll create our two modules as razor class libraries:
```shell
dotnet new razorclasslib -o Modules/InsuranceApplication
dotnet new razorclasslib -o Modules/MedicalQuestions
```
Then, we’ll reference our modules from the host project.
From within the host project:
```shell
dotnet add reference ../Modules/InsuranceApplication/InsuranceApplication.csproj
dotnet add reference ../Modules/MedicalQuestions/MedicalQuestions.csproj
```
Navigate to the Insurance Application module at `Modules/InsuranceApplication`. Your Razor class library will have some sample Blazor files, a `wwwroot` folder, etc. You can remove all those generated files.
Let’s build a couple of Razor Pages that will be used as this bounded context’s UI:
```shell
dotnet new page -o Areas/InsuranceApplication/Pages -n ContactInfo
dotnet new page -o Areas/InsuranceApplication/Pages -n InsuranceSelection
```
You might have to go into your Razor pages and adjust the generated namespaces. In my example, I changed them to `InsuranceApplication.Areas.InsuranceApplication.Pages`.
Navigate back to the `HostApp` web project and try to build it.
It will fail!
Our Razor class libraries are trying to use Razor Pages - which is a web feature. This requires referencing the appropriate framework assemblies.
Before .NET Core 3.0, we would have added a reference to some extra NuGet packages like `Microsoft.AspNetCore.Mvc`, etc. As of .NET Core 3.0, many of these ASP.NET-related packages are included in the .NET Core SDK itself.
For some projects, this breaking change can cause issues and confusion! For more details, check out Andrew Lock’s in-depth look at this issue.
So, let’s change the project files of our two razor class libraries to the following:
```xml
<Project Sdk="Microsoft.NET.Sdk.Razor">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <AddRazorSupportForMvc>True</AddRazorSupportForMvc>
  </PropertyGroup>
  <ItemGroup>
    <FrameworkReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>
</Project>
```
Yes, I removed all the boilerplate Blazor code, since the appropriate references are included in the `FrameworkReference`.
Go into the razor pages you created and add some dummy HTML.
For example:
```cshtml
@page
@model InsuranceApplication.Areas.InsuranceApplication.Pages.ContactInfoModel
@{
}

<h1>Insurance Contact Info</h1>
```
Try running your host application and navigate to `/InsuranceApplication/ContactInfo` and `/InsuranceApplication/InsuranceSelection`. Both pages should display. Cool!
Let’s look at building out a basic flow between our two modules.
First, in the `HostApp` project, add the following HTML to your `Pages/Index.cshtml` file:
```html
<a href="/InsuranceApplication/ContactInfo">Begin Your Application!</a>
```
That will give us a link to click from our home screen to begin the flow through our application logic.
In your `ContactInfo.cshtml` file within the `InsuranceApplication` project, insert the following:
```cshtml
@page
@model InsuranceApplication.Areas.InsuranceApplication.Pages.ContactInfoModel
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

<h1>Insurance Contact Info</h1>

<form method="post">
    <label>Email:</label>
    <input asp-for="EmailAddress" required type="email" />
    <button type="submit">submit</button>
</form>
```
In `ContactInfo.cshtml.cs`, add the following:
```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace InsuranceApplication.Areas.InsuranceApplication.Pages
{
    public class ContactInfoModel : PageModel
    {
        [BindProperty]
        public string EmailAddress { get; set; }

        // Synchronous handler, so it's named OnPost (not OnPostAsync).
        public IActionResult OnPost()
        {
            Console.WriteLine($"Email Address is {this.EmailAddress}.");
            return RedirectToPage("./InsuranceSelection");
        }
    }
}
```
This will allow us to enter an email address and move to the next page in our flow.
In the `InsuranceSelection.cshtml` page, just add a link to keep things simple:
```html
<a href="/MedicalQuestionnaire/Questions">Next</a>
```
You can imagine that this page would allow the user to select a specific insurance plan they want to apply for.
You might have noticed that we never created any razor pages in the Medical Questionnaire module. Let’s do that now.
Navigate to the root of the `MedicalQuestions` project and enter the following in your terminal:
```shell
dotnet new page -o Areas/MedicalQuestionnaire/Pages -n Questions
```
Again, you might need to adjust the generated namespaces.
Within the `Questions.cshtml` file, add a link:
```html
<h1>Medical Questionnaire</h1>
<a href="/">Finish</a>
```
Try running your host project with `dotnet run` and click through the entire UI!
When working with these kinds of loosely coupled modules a question arises:
What happens when we need to display information from multiple bounded contexts on the same screen?
Usually, we resort to building composite UIs.
These are UIs where various parts of the UI are rendered and controlled by components owned by a specific bounded context, service or module.
As you can see in the image above, each component might be owned by a different back-end module and would be isolated from and loosely coupled to the other modules.
Let’s build a simple composite UI using some exciting new .NET Core 3.0 technologies!
We need to configure our host application to support the new Blazor/Razor components.

In `Pages/Shared/_Layout.cshtml`, add:

```html
<script src="_framework/blazor.server.js"></script>
```

In `Startup.cs`, inside `ConfigureServices`, add:

```csharp
services.AddServerSideBlazor();
```

And in the `Configure` method:

```csharp
app.UseEndpoints(endpoints =>
{
    endpoints.MapRazorPages();
    endpoints.MapBlazorHub(); // Add this one.
});
```
Navigate to the `InsuranceApplication` project’s root directory and execute the following:
```shell
dotnet new razorcomponent -o ./Components -n ApplicationDashboard
```
Replace the contents of your new razor component with the following:
```razor
@using Microsoft.AspNetCore.Components.Web

<h3>Application Dashboard</h3>

<p>Time: @currentTime</p>

<button @onclick="Update">Update</button>

@code {
    private string currentTime = DateTime.Now.ToString();

    private void Update()
    {
        currentTime = DateTime.Now.ToString();
    }
}
```
Do the same steps within the `MedicalQuestions` module to create a dummy Razor component (but change the title of the component).
Now, back in your host application project, add the following to the top of the `Pages/Index.cshtml` page:
```cshtml
@page
@using InsuranceApplication.Components // Add this.
@using MedicalQuestions.Components // Add this.
@model IndexModel
```
Then, in the middle of your HTML page, add:
```cshtml
<div class="row">
    <div class="col-6">
        @(await Html.RenderComponentAsync<ApplicationDashboard>(RenderMode.ServerPrerendered))
    </div>
    <div class="col-6">
        @(await Html.RenderComponentAsync<QuestionsDashboard>(RenderMode.ServerPrerendered))
    </div>
</div>
```
Finally… start your host project using `dotnet run`!
Any time one of the components needs to be changed, the host application will (most likely) not need to change. Both bounded contexts are still isolated from each other but our architecture allows us to be smart about how we display that information.
Note: There are other ways to build these types of components: view components, tag helpers, tag helper components, javascript components, etc.
Modules can be deployed as NuGet packages so that they ultimately can be managed by independent teams and be isolated as true modules.
Each bounded context can:
In terms of service/module interaction, any calls between modules could use a shared abstractions library which would be configured in DI at run-time to use the concrete service from the appropriate bounded context:
It doesn’t show on the diagram, but the Medical Questions context might need to get some information from the Insurance Application context in some scenarios where using messaging isn’t appropriate.
In this case, the Medical Questions code would use the `IInsuranceApplicationService` interface (from DI) to make those calls. Then, at run-time, the host process would configure the concrete implementation (`InsuranceApplicationFacade`) to be given to anyone who asks for the interface.
This technique keeps each module loosely coupled and enforces a strict contract in terms of what one bounded context can explicitly be “asked for.” However, some might say the need to make direct calls to another bounded context is a sign that the boundaries are incorrect… a topic for another day.
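To make the shared-abstraction idea concrete, here’s a minimal sketch. It’s written in TypeScript for brevity (in the .NET host you’d register the implementation with ASP.NET Core’s built-in DI container), and everything besides the `IInsuranceApplicationService` and `InsuranceApplicationFacade` names is an illustrative assumption:

```typescript
// Shared abstractions "library": the only thing both modules reference.
interface IInsuranceApplicationService {
  getApplicantEmail(applicationId: string): string;
}

// Concrete implementation, owned by the Insurance Application module.
class InsuranceApplicationFacade implements IInsuranceApplicationService {
  getApplicantEmail(applicationId: string): string {
    // Real code would query this module's own persistence.
    return `applicant-${applicationId}@example.com`;
  }
}

// A crude DI container stand-in: at startup, the host wires the interface
// to the concrete facade; consuming modules only ever see the interface.
const container = new Map<string, unknown>();
container.set("IInsuranceApplicationService", new InsuranceApplicationFacade());

// The Medical Questions module asks the container for the contract.
const service = container.get(
  "IInsuranceApplicationService"
) as IInsuranceApplicationService;

console.log(service.getApplicantEmail("42"));
```

The point is that the Medical Questions module compiles against the interface alone; only the host knows which concrete class satisfies it.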
I hope you found this article helpful and informative. I’m sure you can see that razor class libraries are a really exciting feature of .NET Core that hasn’t been talked about too much.
This is one way to use them and for some cases it might work really well!
Here are some resources about the topics we covered:
Check out my book about keeping your code healthy!
I’ll be writing new content over at my new site/blog. Check it out!
Don’t forget to connect with me on:
This was one of my biggest professional accomplishments given the many technical and political constraints. But that’s a whole other topic for another day.
This was before I understood domain-driven design. Since then, I’ve wondered: what might this have looked like using a domain-driven approach?
Here are a few points we’ll cover:
Event storming is a workshop that is usually held to discover and learn about how a business works. You invite all relevant stakeholders and use sticky notes to discover what domain events occur, and what impact or additional processes they might lead to.
I’ve also found event storming to be helpful even in smaller settings and within smaller team discussions or design discussions.
For more, see the book by its creator on leanpub.
In this article, I’ll be using a simple version of event storming so we can focus on the important points.
This system was designed for (a) the main insurance provider and (b) 3rd party insurance providers to allow their customers to apply for life insurance online. It was configurable in that 3rd party providers could choose which parts of the general application process would be included, where the entry point would be, and other customizations.
Generally speaking, the process went like this when dealing with applications on behalf of 3rd party providers:
1. Accept some HTTP POST data from an initial form filled out by a third-party provider’s customer.
2. The user continues to fill out the online life insurance application (with multiple steps).
3. Once done, the user would provide their banking information.
4. It would then process the application by calling a remote API (synchronously - while the user waited for the UI to refresh).
5. Finally, it would display whether the application was accepted or not (with the next steps for the user to take).
Everything was done in one massive code-base. All the web UI and business logic were all together.
Some of the problems with this approach were:
Let’s start to look at the business flow in terms of the events that occur in the domain to give us an overview of what we’re dealing with.
Keep in mind that this is not an exhaustive look at all the domain events, but a very simplified look:
`PersonalInformationProvided` is considered a domain event because other things happen in response to it.
For example, the insurance company insisted that the system send an e-mail with a token so the proposed insured (i.e. the applicant) could resume their application at a later date.
The results of `MedicalQuestionsProvided` may lead to the disqualification of the applicant or of the selected insurance product, enable an entirely different path through the rest of the form, etc. These would be domain events too, but they aren’t shown for simplicity’s sake.
After the application is completed, we get into the next steps:
As you can see, this is where things get interesting.
Right after `BankingInfoProvided`, the system would remotely call the insurance company’s “special” API that would look at past applicant history, banking history, medical history, etc. and make a decision. This decision was returned via the same HTTP POST (🙃).
In some cases, the proposed insured had to undergo a physical medical examination at a later date. Their application was kept in a pending state.
If the application passed, then the insurance policy would be created (`PolicyCreated`). Otherwise, a failure just meant there was no policy created and everyone could be on their merry way.
Upon the first payment of the policy (`FirstPolicyPaymentReceived`), the policy would “activate.” If a policy was not activated within the first 30 days (or some arbitrary duration), it would be immediately cancelled.
In the design of the existing system, there was no concept of bounded contexts. Everything was stored in one massive XML file (yes…the things we have to deal with in outdated industries).
We know that we need to split this up though. How should we?
One of the kinds of events to watch out for is what Alberto Brandolini (the creator of event storming) calls “pivotal events.” These are the most important events because they are drivers for major transitions within a domain.
In this scenario, the most important event is `PolicyCreated`. This is when the application has officially transitioned into a real insurance policy. That’s the end goal of what the user wants in the first place.
Interestingly, this is also where the user-facing web application ended and the back-office policy management began.
Another interesting area (that you might not be aware of unless you’re familiar with the industry) is that the medical questionnaire is very complex. Depending on how you answer certain questions, many different things could happen.
The code and business logic around this specific area is a hotspot.
Because of the complexity contained within this area, I’d be interested in exploring this area as a sub-domain and treating it as a bounded context.
Sub-domains are not the same as bounded contexts. Sub-domains are still business related divisions, whereas bounded contexts are about what boundaries our software has. Many times, though, they do match - especially at the beginning of modelling, like we are doing. But, if the business structure changes then the sub-domains might change (with the business) while the bounded contexts are still baked into the software.
From the business’s perspective, `MedicalQuestionsProvided` is definitely a pivotal event. It’s the main driver for whether a proposed insured is eligible for coverage or qualifies for additional “bonus” insurance add-on products.
In the existing system, this part of the system was the hardest to build and maintain!
This is where the main concept of bounded contexts shines - you can draw a bubble around a specific area or problem space that’s complex and keep that complexity isolated. No one else needs to know how it works; they just need to know what happens at the end of the questionnaire.
Keep in mind, though, that we can’t just create bounded contexts wherever we want. In this case, we’ve identified (what seems for the moment to be) a pivotal event.
Also, if we dig deeper with domain experts, we would find that the (ubiquitous) language used in context of the medical questions is very specific to the medical questionnaire.
For example, in this context, when speaking about the applicant, it’s in terms of their health, physical well-being, etc.
This is the only place in the entire domain where this language is used about the applicant.
What you’ll notice is that the flow of data is from one context, into another, and then back out into the same one again.
Usually, in typical DDD examples, you see data flow from one context into another and the flow never returns to the original context again.
My gut feeling is that these kinds of complex sub-domains that are smack in the middle of some over-arching business process or other parent domains are common in real-world domains.
One of the major issues with the existing system was that it would issue an HTTP POST directly from the web application to the insurance provider’s “special” API that would approve or decline an application.
Again, this easily leads to issues like:
Since this was a white-label product, that API would not necessarily be controlled by the insurance provider who we were building this for. So, we had no control over the performance and availability of that API.
How would I do this today?
To re-cap more clearly, here are the steps needed (barebones):
This type of long-running job/process is usually best done using the saga pattern.
Other considerations when dealing with these types of long-running jobs might be the routing slip pattern or the process manager pattern - both of which are similar to the saga pattern.
With the saga pattern, here’s roughly what I would envision:
This looks a lot more complicated, but please bear with me for a moment.
Notice that instead of issuing the HTTP POST “plain-and-simple”, the `BankingInfoProvided` event will kick off a long-running background job - specifically, the `ApplicationSubmissionSaga`.
This saga has two handlers:
Why? What does this give us?
In the original design (direct HTTP POST) - what happens if the external API is not even running?
Oh…
I guess the user is out of luck. They can’t even submit their application…
What if this happens when using the saga pattern?
The saga would fail, then go to sleep and re-try at a later date (since it is a background process). If the external system comes back up the next day then the saga will succeed in submitting the application and continue!
Note: This re-try process is indicated in the diagram above by the orange gear on the `SubmitApplicationForApprovalStep`.
This way of dealing with a distributed transaction or long-running business process (even when the external API isn’t owned by you) helps you build systems that are expected to fail - and to fail well.
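To make the retry behaviour concrete, here’s a toy sketch of the idea in TypeScript. In a real system the saga would be durable and scheduled by a messaging framework (NServiceBus, MassTransit, etc.); the `submit` step here stands in for the `SubmitApplicationForApprovalStep`, and the flaky API is simulated:

```typescript
// Toy saga step with retries: the "external API" fails a few times,
// then comes back up - and the step eventually succeeds without the user.
async function submitWithRetry(
  submit: () => Promise<string>,
  maxAttempts: number
): Promise<string> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await submit();
    } catch {
      // In a real saga this would be a durable, scheduled re-try
      // (e.g. "sleep and wake up tomorrow"), not an in-process loop.
    }
  }
  throw new Error("Application could not be submitted; notify support.");
}

// Simulate an external API that is down for the first two calls.
let calls = 0;
const flakyInsurerApi = async (): Promise<string> => {
  calls++;
  if (calls < 3) throw new Error("external API unavailable");
  return "ApplicationAccepted";
};

submitWithRetry(flakyInsurerApi, 5).then((result) => console.log(result));
```

The user never sees attempts one and two fail - the saga simply keeps going until the external system responds.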
Let’s compare the two designs in terms of impact on UX. Yes, this difference in modelling impacts UX in a huge way.
With the naïve original design, if the external system is down, the user cannot complete their application. So… the user will have to come back to the website at a later date and “try again!”
What about with the saga?
Here’s where using this pattern shines: the user will “complete” their application and we will tell them, “You will receive an e-mail within the next 24 hours once we’ve finished processing your application.”
The user goes home and eventually, at some point in the near future, will get an e-mail telling them everything they need to know.
Note: In the diagram above, you’ll see that I’ve added the commands around this part of the process.
But, what if the external system is down now?
The user doesn’t know. They’re gone. We’ve designed our system such that the user isn’t needed for us to communicate with the external API and deal with failures.
What is the user’s experience?
Awesome.
The implications of this are huge. This is the difference between systems that are easy to use and systems that are annoying and difficult to use.
This can affect the branding and reputation of the insurance company.
If users can’t even complete their application because they need to have their web browser open so that our system can communicate with the insurance providers API… that’s crazy. Really bad.
This comes back to the idea of modelling your domain well. This is all modelling. We haven’t written any code, but conceptually we can know what would happen with both designs.
This is why it’s important for companies who want to succeed and give their users the best experience possible to hire developers and engineers who are good at modelling business domains and processes like this.
It literally could be the difference between a company’s success and failure!
And it’s one look at why learning to model with these tools is important as a developer/engineer.
Everything is fine and dandy now. Except the stakeholders don’t like the fact that the user can’t get immediate feedback on the application’s approval status on the web site.
Yes, the applicant will get an e-mail.
But, the stakeholders also want to keep the web site able to display the results to the customer.
What do we do? Say no?
Our new design means that the application’s approval or denial now happens in a background process - it’s disconnected from our web application.
Here’s one solution you might have thought about: the web application can also subscribe to the `ApplicationAccepted`, `ApplicationDenied`, and `ApplicationPending` events and use web sockets (using SignalR, etc.) to “push” the results back to the user’s browser.
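Here’s a framework-agnostic sketch of that subscribe-and-push idea. The event names come from the article; the tiny `EventBus` and everything else is an illustrative stand-in (a real implementation would use SignalR or raw web sockets as the transport):

```typescript
// A tiny in-memory event bus standing in for the messaging infrastructure.
type Handler = (payload: string) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  publish(event: string, payload: string): void {
    for (const handler of this.handlers.get(event) ?? []) {
      handler(payload);
    }
  }
}

const bus = new EventBus();
const pushedToBrowser: string[] = [];

// The web application subscribes to the saga's outcome events and
// "pushes" them to the browser (here: just collects them).
for (const evt of ["ApplicationAccepted", "ApplicationDenied", "ApplicationPending"]) {
  bus.subscribe(evt, (payload) => pushedToBrowser.push(`${evt}: ${payload}`));
}

// Later, the background saga publishes its result...
bus.publish("ApplicationAccepted", "application #42");

console.log(pushedToBrowser[0]);
```

The web app is just another subscriber; the saga neither knows nor cares whether a browser is still connected.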
This might even include showing a browser notification (even though we all hate them).
Either way, it’s one way to satisfy all the requirements we have so far:
This was a somewhat primitive look at this domain. There’s so much more to it.
Hopefully you did learn something about modelling business processes though. Sometimes, just by changing how we process one step in a business flow we can make a huge impact!
If enough people enjoy this and provide feedback then I might just dig into some more specific problems and do some more intense modelling 😉.
Did you know that the inventor of the concept of “null” has called it his “Billion Dollar Mistake”?
As simple as it seems, once you get into larger projects and codebases you’ll inevitably find some code that goes “off the deep end” in its use of nulls.
Note: This is an excerpt from my book Refactoring TypeScript: Keeping Your Code Healthy.
Sometimes, we desire to simply make a property of an object optional:
```typescript
class Product {
  public id: number;
  public title: string;
  public description: string;
}
```
In TypeScript, a `string` property can be assigned the value `null`.
But… so can a `number` property!
```typescript
const chocolate: Product = new Product();
chocolate.id = null;
chocolate.description = null;
```
Hmmm….
That doesn’t look so bad at first glance.
But, it can lead to the possibility of doing something like this:
```typescript
const chocolate: Product = new Product(null, null, null);
```
What’s wrong with that? Well, it allows your code (in this case, the `Product` class) to get into an inconsistent state.
Does it ever make sense to have a `Product` in your system that has no `id`? Probably not.
Ideally, as soon as you create your `Product`, it should have an `id`.
So… what happens in other places that have to deal with logic around `Product`s?
Here’s the sad truth:
```typescript
let title: string;

if (product != null) {
  if (product.id != null) {
    if (product.title != null) {
      title = product.title;
    } else {
      title = "N/A";
    }
  } else {
    title = "N/A";
  }
} else {
  title = "N/A";
}
```
Is that even real code someone would write?
Yes.
Let’s look at why this code is unhealthy and considered a “code smell” before we look at some techniques to fix it.
This code is hard to read and understand. Therefore, it’s very prone to bugs when changed.
I think we can agree that having code like this scattered in your app is not ideal. Especially when this kind of code is inside the important and critical parts of your application!
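As a quick aside: in modern TypeScript, optional chaining and nullish coalescing can collapse that particular pyramid into a single expression. This doesn’t remove the underlying design problem - it just hides the checks - but it’s worth knowing. A minimal sketch, assuming a `Product` shape like the one above:

```typescript
// A Product with nullable fields, as in the earlier example.
interface Product {
  id: number | null;
  title: string | null;
}

// One expression replaces the nested null checks: if product or
// product.title is null/undefined, we fall back to "N/A".
function displayTitle(product: Product | null): string {
  return product?.title ?? "N/A";
}

console.log(displayTitle(null));                          // "N/A"
console.log(displayTitle({ id: 1, title: "Chocolate" })); // "Chocolate"
```

Again: this is syntax sugar over the null checks, not a fix for the inconsistent state itself - that’s what the techniques below address.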
As a relevant side note, someone might raise the fact that TypeScript supports non-nullable types.
This allows you to add a special flag to your compilation options that will prevent, by default, any variables from being assigned `null` as a value.
A few points about this argument:
- Most of us are dealing with existing codebases that would take tons of work and time to fix these compilation errors.
- Without testing the code well, and carefully avoiding assumptions, we could still potentially cause run-time errors with these changes.
- This article (taken from my book) teaches you solutions that can be applied in other languages - which may not have this option available.
Either way, it’s always safer to apply smaller, more targeted improvements to our code. Again, this allows us to make sure the system still behaves the same and avoids introducing a large amount of risk while making these improvements.
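For reference, enabling that check is a one-line change in `tsconfig.json` (shown alone here; your real config will have more options):

```json
{
  "compilerOptions": {
    "strictNullChecks": true
  }
}
```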
Imagine you work for a company that writes software for dealing with legal cases.
As you are working on a feature, you discover some code:
```typescript
const legalCases: LegalCase[] = await fetchCasesFromAPI();

for (const legalCase of legalCases) {
  if (legalCase.documents != null) {
    uploadDocuments(legalCase.documents);
  }
}
```
Remember that we should be wary of null checks? What if some other part of the code forgot to check for a `null` array?
The Null Object Pattern can help: you create an object that represents an “empty” or `null` object.
Let’s look at the `fetchCasesFromAPI()` method. We’ll apply a version of this pattern that’s a very common practice in JavaScript and TypeScript when dealing with arrays:
```typescript
const fetchCasesFromAPI = async function() {
  const legalCases: LegalCase[] = await $http.get('legal-cases/');

  for (const legalCase of legalCases) {
    // Null Object Pattern
    legalCase.documents = legalCase.documents || [];
  }

  return legalCases;
}
```
Instead of leaving empty arrays/collections as `null`, we are assigning an actual empty array.
Now, no one else will need to make a null check!
But… what about the entire legal case collection itself? What if the API returns `null`?
```typescript
const fetchCasesFromAPI = async function() {
  const legalCasesFromAPI: LegalCase[] = await $http.get('legal-cases/');
  // Null Object Pattern
  const legalCases = legalCasesFromAPI || [];

  // Note: "case" is a reserved word, so the loop variable is "legalCase".
  for (const legalCase of legalCases) {
    // Null Object Pattern
    legalCase.documents = legalCase.documents || [];
  }

  return legalCases;
}
```
Cool!
Now we’ve made sure that everyone who uses this method does not need to be worried about checking for nulls.
Other languages like C#, Java, etc. won’t allow you to assign a plain empty array (i.e. `[]`) to a typed collection due to rules around strong typing.
In those cases, you can use something like this version of the Null Object Pattern:
```typescript
class EmptyArray {
  static create<T>() {
    return new Array<T>();
  }
}

// Use it like this:
const myEmptyArray: string[] = EmptyArray.create<string>();
```
Imagine that you are working on a video game. In it, some levels might have a boss.
When checking if the current level has a boss, you might see something like this:
```typescript
if (currentLevel.boss != null) {
  currentLevel.boss.fight(player);
}
```
We might find other places that do this null check:
```typescript
if (currentLevel.boss != null) {
  currentLevel.completed = currentLevel.boss.isDead();
}
```
If we introduce a null object, then we can simply remove all these null checks.
First, we need an interface to represent our `Boss`:
```typescript
interface IBoss {
  fight(player: Player);
  isDead();
}
```
Then, we can create our concrete boss class:
```typescript
class Boss implements IBoss {
  fight(player: Player) {
    // Do some logic and return a bool.
  }

  isDead() {
    // Return whether boss is dead depending on how the fight went.
  }
}
```
Next, we’ll create an implementation of the `IBoss` interface that represents a “null” `Boss`:
```typescript
class NullBoss implements IBoss {
  fight(player: Player) {
    // Player always wins.
  }

  isDead() {
    return true;
  }
}
```
The `NullBoss` will automatically allow the player to “win,” and we can remove all our null checks!
In the following code example, whether the boss is an instance of `NullBoss` or `Boss`, there are no extra checks to be made.
```typescript
currentLevel.boss.fight(player);
currentLevel.completed = currentLevel.boss.isDead();
```
Note: This section in the book contains more techniques to attack this code smell!
This post was an excerpt from Refactoring TypeScript which is designed as an approachable and practical tool to help developers get better at building quality software.
Don’t forget to connect with me on twitter or LinkedIn!
It’s deceptively simple.
Note: This is an excerpt from my book Refactoring TypeScript: Keeping Your Code Healthy.
Take this code, for example:
```typescript
const email: string = user.email;

if (email !== null && email !== "") {
  // Do something with the email.
}
```
Notice that we are handling the email’s raw data?
Or, consider this:
```typescript
const firstname = user.firstname || "";
const lastname = user.lastname || "";
const fullName: string = firstname + " " + lastname;
```
Notice all that extra checking to make sure the user’s names are not `null`? You’ve seen code like this, no doubt.
What’s wrong with this code? There are a few things to think about:
- That logic is not sharable and therefore will be duplicated all over the place.
- In more complex scenarios, it’s hard to see what the underlying business concept represents (which leads to code that’s hard to understand).
- If there is an underlying business concept, it’s implicit, not explicit.
The business concept in the code example above is something like a user’s display name or full name.
However, that concept only exists temporarily in a variable that just happened to be named correctly. Will it be named the same thing in other places? If you have other developers on your team - probably not.
We have code that’s potentially hard to grasp from a business perspective, hard to understand in complex scenarios and is not sharable to other places in your application.
How can we deal with this?
Primitive types should be the building blocks out of which we create more useful business-oriented concepts/abstractions in our code.
This helps each specific business concept to have all of its logic in one place (which means we can share it and reason about it much more easily), implement more robust error handling, reduce bugs, etc.
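For instance, the full-name logic from earlier could be pulled into a small value object. This is a sketch only - the `FullName` name and its rules are my own illustration, not from the original code:

```typescript
// A sketch of wrapping the primitive "first/last name" strings in a
// small, business-oriented value object. The class name and rules
// here are illustrative assumptions.
class FullName {
  private constructor(
    private readonly first: string,
    private readonly last: string
  ) {}

  // Normalize null/undefined/empty parts in one place.
  public static create(first?: string | null, last?: string | null): FullName {
    return new FullName(first ?? "", last ?? "");
  }

  // The "display name" concept is now explicit and shareable.
  public toDisplayString(): string {
    return [this.first, this.last].filter(p => p !== "").join(" ");
  }
}

const name = FullName.create("Alice", null);
console.log(name.toDisplayString()); // "Alice"
```

Now the null handling and formatting live in exactly one place, and every caller shares the same rules.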
I want to look at the most common cause of primitive overuse that I’ve experienced. I see it all the time.
Imagine we’re working on a web application that helps clients to sell their used items online.
We’ve been asked to add some extra rules around the part of our system that authenticates users.
Right now, the system only checks if a user was successfully authenticated.
```typescript
const isAuthenticated: boolean = await userIsAuthenticated(username, password);

if (isAuthenticated) {
  redirectToUserDashboard();
} else {
  returnErrorOnLoginPage("Credentials are not valid.");
}
```
Our company now wants us to check if users are active. Inactive users will not be able to log in.
Many developers will do something like this:
```typescript
const user: User = await userIsAuthenticated(username, password);
const isAuthenticated: boolean = user !== null;

if (isAuthenticated) {
  if (user.isActive) {
    redirectToUserDashboard();
  } else {
    returnErrorOnLoginPage("User is not active.");
  }
} else {
  returnErrorOnLoginPage("Credentials are not valid.");
}
```
Oh no. We’ve introduced code smells that we know are going to cause maintainability issues!
We’ve got some null checks and nested conditions in there now (which are both signs of unhealthy code that are addressed in the Refactoring TypeScript book.)
So, let’s refactor that first by applying (a) the special case pattern and (b) guard clauses (both of these techniques are explained at length in the book too.)
```typescript
// This will now always return a User, but it may be a special case type
// of User that will return false for "user.isAuthenticated()", etc.
const user: User = await userIsAuthenticated(username, password);

// We've created guard clauses here.
if (!user.isAuthenticated()) {
  returnErrorOnLoginPage("Credentials are not valid.");
}

if (!user.isActive()) {
  returnErrorOnLoginPage("User is not active.");
}

redirectToUserDashboard();
```
Much better.
Now that your managers have seen how fast you were able to add that new business rule, they have a few more they need.
- If the user’s session already exists, then send the user to a special home page.
- If the user has locked their account due to too many login attempts, send them to a special page.
- If this is a user’s first login, then send them to a special welcome page.
Yikes!
If you’ve been in the industry for a few years, then you know how common this is!
At first glance, we might do something naïve:
```typescript
// This will now always return a User, but it may be a special case type
// of User that will return false for "user.isAuthenticated()", etc.
const user: User = await userIsAuthenticated(username, password);

// We've created guard clauses here.
if (!user.isAuthenticated()) {
  returnErrorOnLoginPage("Credentials are not valid.");
}

if (!user.isActive()) {
  returnErrorOnLoginPage("User is not active.");
}

if (user.alreadyHadSession()) {
  redirectToHomePage();
}

if (user.isLockedOut()) {
  redirectToUserLockedOutPage();
}

if (user.isFirstLogin()) {
  redirectToWelcomePage();
}

redirectToUserDashboard();
```
Notice that because we introduced guard clauses, it’s much easier to add new logic here? That’s one of the awesome benefits of making your code high-quality - it makes future changes and additions much easier.
But, in this case, there’s an issue. Can you spot it?
Our `User` class is becoming a dumping ground for all our authentication logic.
Is it that bad? Yep.
Think about it: what other places in your app will need this data? Nowhere - it’s all authentication logic.
One refactoring would be to create a new class called `AuthenticatedUser` and put only authentication-related logic in that class.
This would follow the Single Responsibility Principle.
But, there’s a much simpler fix we could make for this specific scenario.
Any time I see this pattern - a method returns a boolean, or returns an object whose boolean properties are checked/tested immediately - it’s a much better practice to replace the booleans with an enum.
From our last code snippet above, let’s change the method `userIsAuthenticated` to something that more accurately describes what we are trying to do: `tryAuthenticateUser`.
And, instead of returning either a `boolean` or a `User` - we’ll send back an enum that tells us exactly what the result was (since that’s all we are interested in knowing).
```typescript
enum AuthenticationResult {
  InvalidCredentials,
  UserIsNotActive,
  HasExistingSession,
  IsLockedOut,
  IsFirstLogin,
  Successful
}
```
There’s our new enum that will specify all the possible results from attempting to authenticate a user.
Next, we’ll use that enum:
```typescript
const result: AuthenticationResult = await tryAuthenticateUser(username, password);

if (result === AuthenticationResult.InvalidCredentials) {
  returnErrorOnLoginPage("Credentials are not valid.");
}

if (result === AuthenticationResult.UserIsNotActive) {
  returnErrorOnLoginPage("User is not active.");
}

if (result === AuthenticationResult.HasExistingSession) {
  redirectToHomePage();
}

if (result === AuthenticationResult.IsLockedOut) {
  redirectToUserLockedOutPage();
}

if (result === AuthenticationResult.IsFirstLogin) {
  redirectToWelcomePage();
}

redirectToUserDashboard();
```
Notice how much more readable that is? And, we aren’t polluting our `User` class anymore with a bunch of extra data that is unnecessary!
We are returning one value. This is a great way to simplify your code.
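The article never shows `tryAuthenticateUser` itself. Here is one way it might look - a sketch under my own assumptions (the in-memory user store and the `User` fields are invented for illustration; the enum is repeated so the snippet stands alone):

```typescript
enum AuthenticationResult {
  InvalidCredentials,
  UserIsNotActive,
  HasExistingSession,
  IsLockedOut,
  IsFirstLogin,
  Successful
}

// Hypothetical user shape for illustration only.
interface User {
  password: string;
  isActive: boolean;
  hasSession: boolean;
  isLockedOut: boolean;
  loginCount: number;
}

// Stub user store; a real app would query a database.
const users: { [name: string]: User } = {
  alice: { password: "pw", isActive: true, hasSession: false, isLockedOut: false, loginCount: 3 },
  bob: { password: "pw", isActive: false, hasSession: false, isLockedOut: false, loginCount: 1 }
};

// One sketch of tryAuthenticateUser: each check maps to one enum value,
// so the caller never sees raw booleans or null users.
async function tryAuthenticateUser(
  username: string,
  password: string
): Promise<AuthenticationResult> {
  const user = users[username];
  if (!user || user.password !== password) {
    return AuthenticationResult.InvalidCredentials;
  }
  if (!user.isActive) return AuthenticationResult.UserIsNotActive;
  if (user.isLockedOut) return AuthenticationResult.IsLockedOut;
  if (user.hasSession) return AuthenticationResult.HasExistingSession;
  if (user.loginCount === 0) return AuthenticationResult.IsFirstLogin;
  return AuthenticationResult.Successful;
}
```

The ordering of the checks is a design choice: the most fundamental failures (bad credentials) are ruled out first.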
This is one of my favorite refactorings! I hope you will find it useful too.
Whenever I use this refactoring, I know automatically that the strategy pattern may help us some more.
Imagine the code above had lots more business rules and paths.
We can further simplify it by using a form of the strategy pattern:
```typescript
const strategies: any = [];

strategies[AuthenticationResult.InvalidCredentials] = () =>
  returnErrorOnLoginPage("Credentials are not valid.");
strategies[AuthenticationResult.UserIsNotActive] = () =>
  returnErrorOnLoginPage("User is not active.");
strategies[AuthenticationResult.HasExistingSession] = () =>
  redirectToHomePage();
strategies[AuthenticationResult.IsLockedOut] = () =>
  redirectToUserLockedOutPage();
strategies[AuthenticationResult.IsFirstLogin] = () =>
  redirectToWelcomePage();
strategies[AuthenticationResult.Successful] = () =>
  redirectToUserDashboard();

strategies[result]();
```
This post was an excerpt from Refactoring TypeScript which is designed as an approachable and practical tool to help developers get better at building quality software.
Don’t forget to connect with me on twitter or LinkedIn!
Let’s look at some reasons for why you should learn about when and how to refactor effectively.
Note: This is an excerpt from my book Refactoring TypeScript: Keeping Your Code Healthy.
Ever work in a software system where your business asked you to build a new feature - but once you started digging into the existing code you discovered that it wasn’t going to be so easy to implement?
Many times, this is because our existing code is not flexible enough to handle new behaviors the business wants to include in your application.
Why?
Well, sometimes we take shortcuts and hack stuff in.
Perhaps we don’t have the knowledge and skills to write healthy code.
Other times, timelines need to be met at the cost of introducing these shortcuts.
This is why refactoring is so important:
Refactoring can help your currently rigid code become flexible and easy to extend once again.
It’s just like a garden. Sometimes you just need to get rid of the weeds!
I’ve been in a software project where adding a checkbox to a screen was not possible given how the system was set up at the time! Adding a button to a screen took days to figure out! And this was as a senior developer with a good number of years under my belt. Sadly, some systems are just very convoluted and hacked together.
This is what happens when we don’t keep our code healthy!
It’s a practical reality that you need to meet deadlines and get a functioning product out to customers. This could mean having to take shortcuts from time-to-time, depending on the context. Bringing value to your customers is what makes money for your company after all.
Long-term, however, these “quick-fixes” or shortcuts lead to code that can be rigid, hard to understand, more prone to contain bugs, etc.
Improving and keeping code quality high leads to:
All of these benefits lead to less time spent on debugging, fixing problems, developers trying to understand how the code works, etc.
In other words, it saves your company real money!
There’s an adage that comes from the Navy SEALs which many have noticed also applies to creating software:
> Slow is smooth. Smooth is fast.
Taking time to build quality code upfront will help your company move faster in the long-term. But even then, we don’t anticipate all future changes to the code and still need to refactor from time-to-time.
Software development is a critical part of our society.
Developers build code that controls:
I’m sure you can think of more cases where the software a developer creates is tied to the security and well-being of an individual or group of people.
Would you expect your doctor to haphazardly prescribe you medications without carefully ensuring that he/she knows what your medical condition is?
Wouldn’t you want to have a vehicle mechanic who takes the time to ensure your vehicle’s tires won’t fall off while you are driving?
Being a craftsman is just another way to say that we should be professional and care about our craft.
We should value quality software that will work as intended!
I’ve had it happen before that my car’s axle was replaced and, while I was driving away, the new axle fell right out of my car! Is that the kind of mechanic I can trust my business to? Nope!
Likewise, the quality of software that we build can directly impact people’s lives in real ways.
You might be familiar with an incident from 2018 where a Boeing 737 crashed and killed everyone on board.
It was found that Boeing had outsourced its software development to developers who were not experienced in this particular industry:
> Increasingly, the iconic American planemaker and its subcontractors have relied on temporary workers making as little as $9 an hour to develop and test software, often from countries lacking a deep background in aerospace.
Also, these developers were having to redo improperly written code over and over again:
> The coders from HCL were typically designing to specifications set by Boeing. Still, “it was controversial because it was far less efficient than Boeing engineers just writing the code,” Rabin said. Frequently, he recalled, “it took many rounds going back and forth because the code was not done correctly.”
One former software engineer with Boeing is quoted as saying:
> I was shocked that in a room full of a couple hundred mostly senior engineers we were being told that we weren’t needed.
While I have no beef with software developers from particular countries, it does concern me when a group of developers are lacking the knowledge or tools to build quality software in such critical systems.
For Boeing in general, what did this overall lack of quality and craftsmanship lead to?
The company’s stocks took a huge dip a couple of days after the crash.
Oh, and don’t forget - people died. No one can undo or fix this.
When all is said and done, Boeing did not benefit from cutting costs, rushing its software development, and focusing on speed rather than quality.
As software development professionals, we should seek to do our part and value being software craftsmen and craftswomen who focus on creating quality software.
Do you still think that just because airplanes can potentially kill people, the software built for them is going to be high quality? Nope.
Here’s another example of software quality issues in the aviation field: $300 million Airbus software bug solved by “turning it off and on again every 149 hours.”
Sounds kind of like a memory leak? You know, when you have a web application that starts getting slow and clunky after keeping it open for a while. Just refresh the page and voila! Everything’s fine again!
Sadly, we are building airplanes like that too…
Quoting the article:
> Airlines who haven’t performed a recent software update on certain models of the Airbus A350 are being told they must completely power cycle the aircraft every 149 hours or risk “…partial or total loss of some avionics systems or functions,” according to the EASA.
Do you want to fly on those planes?
Quality matters. And the fact is, many developers are not writing quality software.
This post was an excerpt from Refactoring TypeScript which is designed as an approachable and practical tool to help developers get better at building quality software.
Don’t forget to connect with me on twitter or LinkedIn!
Combining .NET Core worker services with Coravel can help you build lightweight background job scheduling applications very quickly. Let’s take a look at how you can do this in just a few minutes!
Note: Worker services are lightweight console applications that perform some type of background work like reading from a queue and processing work (like sending e-mails), performing some scheduled background jobs from our system, etc. These might be run as a daemon, windows service, etc.
At the time of writing this article, .NET Core 3 is in preview. First, you must install the SDK. You can use Visual Studio Code for everything else in this article 👍.
Coravel is a .NET Core library that gives you advanced application features out-of-the-box with near-zero config. I was inspired by Laravel’s ease of use and wanted to bring that simple and accessible approach of building web applications to .NET Core.
One of those features is a task scheduler that is configured 100% by code.
By leveraging Coravel’s ease-of-use with the simplicity of .NET Core’s worker service project template, I’ll show you how easily and quickly you can build a small back-end console application that will run your scheduled background jobs!
First, create an empty folder to house your new project.
Then run:
```shell
dotnet new worker
```
Your worker project is all set to go! 🤜🤛
Check out Program.cs and you’ll see this:
```csharp
public static void Main(string[] args)
{
    CreateHostBuilder(args).Build().Run();
}

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureServices(services =>
        {
            services.AddHostedService<Worker>();
        });
```
Let’s add Coravel by running `dotnet add package coravel`.
Next, in Program.cs, we’ll modify the generic code that was generated for us and configure Coravel:
```csharp
public static void Main(string[] args)
{
    IHost host = CreateHostBuilder(args).Build();

    host.Services.UseScheduler(scheduler =>
    {
        // We'll fill this in later ;)
    });

    host.Run();
}

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureServices(services =>
        {
            services.AddScheduler();
        });
```
Since Coravel is a native .NET Core set of tools, it just works™ with zero fuss!
One of Coravel’s fundamental concepts is Invocables.
Each invocable represents a self-contained job within your system that Coravel leverages to make your code much easier to write, compose and maintain.
Next, create a class that implements `Coravel.Invocable.IInvocable`:
```csharp
public class MyFirstInvocable : IInvocable
{
    public Task Invoke()
    {
        Console.WriteLine("This is my first invocable!");
        return Task.CompletedTask;
    }
}
```
Since we are going to simulate some async work, we’ll just log a message to the console and then return `Task.CompletedTask` to the caller.
Here’s where Coravel really shines 😉.
Let’s schedule our new invocable to run every 5 seconds. Inside of our Program.cs main method we’ll add:
```csharp
host.Services.UseScheduler(scheduler =>
{
    // Yes, it's this easy!
    scheduler
        .Schedule<MyFirstInvocable>()
        .EveryFiveSeconds();
});
```
Don’t forget to register your invocable with .NET Core’s service container:
```csharp
.ConfigureServices(services =>
{
    services.AddScheduler();

    // Add this 👇
    services.AddTransient<MyFirstInvocable>();
});
```
In your terminal, run `dotnet run`.
You should see the output in your terminal every five seconds!
Sure, writing to the console is great - but you are going to be making API calls, database queries, etc. after all.
Let’s modify our invocable so that we can do something more interesting:
```csharp
public class SendDailyReportEmailJob : IInvocable
{
    private IMailer _mailer;
    private IUserRepository _repo;

    public SendDailyReportEmailJob(IMailer mailer, IUserRepository repo)
    {
        this._mailer = mailer;
        this._repo = repo;
    }

    public async Task Invoke()
    {
        var users = await this._repo.GetUsersAsync();

        foreach (var user in users)
        {
            var mailable = new DailyReportMailable(user);
            await this._mailer.SendAsync(mailable);
        }
    }
}
```
Since this class will hook into .NET Core’s service container, all the constructor dependencies will be injected via dependency injection.
If you wanted to build a lightweight background application that processes and emails daily reports for all your users then this might be a great option.
While beyond the scope of this article, you can take a look at how .NET Core 3 will allow configuring your worker as a windows service.
And, apparently, there’s upcoming support for systemd too!
What do you guys think about .NET Core’s worker services?
I find they are so easy to get up-and-running. Coupled with the accessibility designed into Coravel, I find these two make an awesome pair for doing some cool stuff!
All of Coravel’s features can be used within these worker services - such as queuing tasks, event broadcasting, mailing, etc.
One thing I’d love to try is to integrate Coravel Pro with a worker service. One step at a time though 🤣.
I’ll be writing new content over at my new site/blog. Check it out!
Don’t forget to connect with me on:
We all struggle with this!
In fact, it’s one of the two hardest parts of programming!
Sometimes it’s easy because you have a clear business concept in mind.
Other times you might be creating classes that aren’t necessarily tied to a business concept.
You might decide to split a class into several classes (for maintainability’s sake).
Or, you may need to use a design pattern.
🤦♂️
Writing code usually involves some specific scenario(s) that you are trying to solve. Here are some tips for tackling these kinds of business scenarios by naming your classes well:
Here’s what a sample of using tip #3 might look like:
> Once an order is submitted by the customer, we need to make sure that all the order’s items are in stock. If some are not in stock, then we need to send them an email to let them know what items are on back-order.
>
> Next, we will pass the information to the shipping department for further handling.
We can use this description to create some classes from the nouns. Sometimes, depending on how you want to organize your code, it’s very worthwhile to use the adjectives too!
- `Order` or `SubmittedOrder`
- `Customer` or `OrderCustomer`
- `OrderItems`
- `OrderItemsOnBackOrderEmail`
- `ShippingDepartmentOrderHandler`
Keep in mind that the nouns/terms are all applicable to the specific context (see bounded contexts from DDD) you are trying to solve for.
The code we write might look like this at first pass:
```typescript
class OrderSubmittedHandler {
  public async handle(order: Order, customer: Customer) {
    const submitted: SubmittedOrder = order.submitOrder();

    if (submitted.hasItemsOnBackOrder()) {
      const mail = new OrderItemsOnBackOrderEmail(submitted);
      mail.sendTo(customer);
      await this._mailer.send(mail);
    }

    const shipping = new ShippingDepartmentOrderHandler(submitted);
    await shipping.sendOrderInfo();
  }
}
```
Points of interest:
Notice that we explicitly model the transition from an `Order` to a `SubmittedOrder`. Since a `SubmittedOrder` has different behaviours than an `Order`, this will keep your classes way smaller and more manageable by following the Single Responsibility Principle.
We introduced cross-cutting objects like the `_mailer` variable. In this example, I already know that I am going to use dependency injection to supply the `Mailer`. But that can be something you decide to refactor later on.
The entire scenario itself is captured by a meaningful noun as a new class!
Of course, this can be refactored after-the-fact. But as a first pass, this technique can really help to solidify what to name things and how they ought to interact.
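The explicit `Order` to `SubmittedOrder` transition mentioned above could be sketched like this. The item shape and back-order rules are my own assumptions for illustration:

```typescript
// A sketch of explicitly modelling the Order -> SubmittedOrder
// transition. The OrderItem shape and back-order logic are invented.
interface OrderItem {
  name: string;
  inStock: boolean;
}

class SubmittedOrder {
  constructor(private readonly items: OrderItem[]) {}

  // Only submitted orders know about back-order checks.
  public hasItemsOnBackOrder(): boolean {
    return this.items.some(item => !item.inStock);
  }
}

class Order {
  constructor(private readonly items: OrderItem[]) {}

  // The state transition is explicit: submitting an Order produces
  // a different type with different behaviours.
  public submitOrder(): SubmittedOrder {
    return new SubmittedOrder(this.items);
  }
}

const order = new Order([{ name: "t-shirt", inStock: false }]);
const submitted = order.submitOrder();
console.log(submitted.hasItemsOnBackOrder()); // true
```

Because the back-order behaviour only exists on `SubmittedOrder`, the compiler itself stops you from asking an unsubmitted `Order` the wrong questions.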
Being verbose actually works well!
Here are some guidelines that might help you when dealing with specific kinds of classes that may or may not be obvious from a business scenario.
For example, sometimes you need to introduce a design pattern. What do you name your class then?
| Intent | Formula | Examples |
|---|---|---|
| Authorization | Can{Entity}{Action} | `CanAdminViewThisPage`, `CanManagerUpdateUserInfo` |
| Validation | Is{Target}{State}{Test} | `IsAddressUpdateAllowed`, `IsUserCreationValid` |
| Interfaces | ICan{Action} | `ICanSendMail`, `ICanBeValidated` |
| Concrete Business Concept | “What is it?” (nouns + adjectives) | `Student`, `EmployeeUserProfile`, `ShippingAddress` |
| Use Cases | {Action}{Target} | `ApproveOrder`, `SendWelcomeEmail` |
| Design Pattern | {Name}{Pattern} | `IShippingAddressStrategy`, `HomeAddressStrategy`, `TemporaryAddressStrategy` |
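To make the formulas concrete, here are a few of them sketched as TypeScript declarations. All names and members are invented for the example:

```typescript
// Hypothetical declarations illustrating the naming formulas above.

// Authorization: Can{Entity}{Action}
class CanManagerUpdateUserInfo {
  constructor(private readonly role: string) {}

  public check(): boolean {
    return this.role === "manager";
  }
}

// Interfaces: ICan{Action}
interface ICanSendMail {
  send(to: string, body: string): Promise<void>;
}

// Use Cases: {Action}{Target}
class ApproveOrder {
  public handle(orderId: number): void {
    // Approval logic for the given orderId would live here.
  }
}

console.log(new CanManagerUpdateUserInfo("manager").check()); // true
```

Each name answers a question a reader would ask: "can this happen?", "what can this do?", "what action does this perform?".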
Don’t forget to connect with me on:
You can also find me at my web site www.jamesmichaelhickey.com.
An e-mail newsletter that will help you level-up in your career as a software developer! Ever wonder:
✔ What are the general stages of a software developer?
✔ How do I know which stage I’m at? How do I get to the next stage?
✔ What is a tech leader and how do I become one?
✔ Is there someone willing to walk with me and answer my questions?
Sound interesting? Join the community!
These patterns also give developers a common language to speak about certain code structures.
E.g. If you are facing a certain problem, and I say “Try the strategy pattern…” then I don’t have to spend an hour explaining what you should do.
You can go look it up or, if you already know the pattern, go implement it!
Everyone is talking about the design patterns found in the famous Gang of Four book, Design Patterns: Elements of Reusable Object-Oriented Software.
But, there are way more software design patterns not found in this book which I’ve found super helpful.
Some of them I’ve learned from Martin Fowler, others from Domain Driven Design, and other sources.
I figured I’d start to catalogue some of these by sharing them!
As a general rule, I’m going to use TypeScript for my code samples.
The pattern I want to start with is a close relative to the null object pattern that I’ve tweeted about before:
> C# tip 🔥 - a form of the null object pattern 👇 pic.twitter.com/fLpO5ibLhO #csharp #dotnet
>
> — James Hickey 🇨🇦👨💻 (@jamesmh_dev) November 28, 2018
The null object pattern is a way of avoiding issues around `null` states (like all the extra `null` checking you need to do 😒).
You create a specialized version of a class to represent it in a “null” state, which exposes the same API or interface as the base object.
In other words, it’s kinda like a stub.
Here’s an example off the top of my head, in TypeScript:
```typescript
// Concrete Order
class Order {
  private _items: any[];

  constructor(items: any[]) {
    this._items = items;
  }

  public placeOrder() {
    // API call or whatever...
  }
}

// Null "version" of the Order
class NullOrder extends Order {
  constructor() {
    super([]);
  }

  public placeOrder() {
    // We just do nothing!
    // No errors are thrown!
  }
}

// Usage:
const orders: Order[] = [
  new Order(['fancy pants', 't-shirt']),
  new NullOrder()
];

for (const order of orders) {
  // This won't throw on nulls since we've
  // used the null object pattern.
  order.placeOrder();
}
```
Imagine we had a scheduled background process that fetches multiple orders and tries to place them.
Like Amazon, we might have a complex process around placing orders that isn’t as linear as we might think it would be.
We might want a buffer period between the time when you click “place order” and when the order is really placed. This will make cancelling an order an easy process (it avoids having to remove credit card charges, etc.)
Note: I might write about this pattern later. 😉
In this scenario, we might be trying to process orders that have already been changed - cancelled during the buffer period, for example.
The null object pattern can help with this kind of scenario.
But even better, when you have multiple versions of these kinds of “special” cases, the special case pattern is here to save the day!
The special case pattern is essentially the same in implementation, but instead of modelling specific “null” states, we can use the same technique to model any special or non-typical cases.
Using the code example above, instead of having “null” versions of our `Order`, by using the special case pattern we can implement more semantic and targeted variants of our `Order` class:
```typescript
class IncompleteOrder extends Order {
  constructor() {
    super([]);
  }

  public placeOrder() {
    // Do nothing...
  }
}

class CorruptedOrder extends Order {
  constructor() {
    super([]);
  }

  public placeOrder() {
    // Try to fix the corruption?
  }
}

class OrderOnFraudulentAccount extends Order {
  constructor() {
    super([]);
  }

  public placeOrder() {
    // Notify the fraud dept.?
  }
}
```
As you can see, this pattern helps us to be very specific in how we model special cases in our code.
Some benefits are:

- Avoiding `null` exception issues
- Keeping our `Order` class from blowing up with tons of logic

So, when should you use the pattern?
You might consider this pattern whenever you see this type of logic in a class’ constructor:
```typescript
constructor() {
  if (this.fraudWasDetected()) {
    this._fraudDetected = true;
  } else {
    this._fraudDetected = false;
  }
}
```
Note: The refactoring for this will begin in the Oops, Too Many Responsibilities section below.
When you see something like the following, then you may want to consider the special case pattern:
```typescript
const order = getOrderFromDB();

if (order.fraudWasDetected()) {
  order.doFraudDetectedStuff();
} else if (!order.hasItems()) {
  order.placeOrder();
}
// ... and more ...
```
Focusing on this example, why is this potentially a “code smell”?
At face value, this type of logic should be “baked into” the Order class(es).
Whoever is using the order shouldn’t have to know about this logic - it’s all order-specific logic. The caller shouldn’t be “asking” the `Order` for details and then deciding how to use the `Order`.
For more, see the tell don’t ask principle - which, most times, does indicate that your logic might be better suited inside the object you are using.
The first fix, then, is to move this logic inside the `Order` class:
```typescript
class Order {
  public placeOrder() {
    if (this._fraudWasDetected()) {
      this._doFraudDetectedStuff();
    } else if (!this._hasItems()) {
      this._placeOrder();
    }
  }
}
```
But, now we run into some issues: we are dealing with different responsibilities (placing orders, fraud detection, item corruption, etc.) in one class! 😓
Note: What follows can be applied to the constructor refactor too.
Special case pattern to the rescue!
```typescript
// Note: I'm just highlighting the main parts, this won't compile 😋
class CorruptedOrder extends Order {
  public placeOrder() {
    this._fixCorruptedItems();
    super.placeOrder();
  }
}

class OrderOnFraudulentAccount extends Order {
  public placeOrder() {
    this._notifyFraudDepartment();
  }
}

class IncompleteOrder extends Order {
  public placeOrder() {
    // Do nothing...
  }
}
```
Great! But how can we instantiate these different classes?
The beauty of design patterns is that they usually end up working together. In this case, we could use the factory pattern:
```typescript
class OrderFactory {
  public static makeOrder(accountId: number, items: any[]): Order {
    if (this._fraudWasDetected(accountId)) {
      return new OrderOnFraudulentAccount();
    } else if (items === null || items.length === 0) {
      return new IncompleteOrder();
    }
    // and so on....
  }
}
```
Note: This is a pretty clear example of when you would really want to use the factory pattern - which I find is often over-explained. Hopefully, this helps you see why you would want to use a factory in the first place.
We have split our `Order` class into a few more specialized classes that can each handle one special case.
Even if the logic for, let’s say, processing a suspected fraudulent account/order is very complex, we have isolated that complexity into a targeted and specialized class.
As far as the consumers of the `Order` class(es) go - they have no idea what’s going on under the covers! They can simply call `order.placeOrder()` and let each class handle its own special case.
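Putting the factory together with the special case classes, consumer code never needs to branch. This runnable sketch uses simplified stand-ins for the classes above (string return values instead of real side effects) to show the idea end-to-end:

```typescript
// Simplified stand-ins for the Order hierarchy and factory above,
// so the consumer-side usage is runnable. Return values are invented
// placeholders for the real side effects.
class Order {
  public placeOrder(): string {
    return "order placed";
  }
}

class IncompleteOrder extends Order {
  public placeOrder(): string {
    return "did nothing";
  }
}

class OrderOnFraudulentAccount extends Order {
  public placeOrder(): string {
    return "fraud department notified";
  }
}

class OrderFactory {
  public static makeOrder(fraudDetected: boolean, items: string[]): Order {
    if (fraudDetected) return new OrderOnFraudulentAccount();
    if (items.length === 0) return new IncompleteOrder();
    return new Order();
  }
}

// The consumer never branches on the special cases:
const order = OrderFactory.makeOrder(false, []);
console.log(order.placeOrder()); // "did nothing"
```

All the “which kind of order is this?” logic lives in one place (the factory), and everything after that is plain polymorphism.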
Have you ever encountered the special case pattern? Or perhaps any of the others I’ve mentioned?
Don’t forget to connect with me on:
You can also find me at my web site www.jamesmichaelhickey.com.
Today I want to show you a technique that I’ve been using which is similar to the guard clause pattern but is used in more advanced scenarios. These scenarios include when you have problems due to external dependencies or complex logic that is required in your guard clauses.
With guard clauses, we would take code that has nested conditional logic like this:
```csharp
if (order != null)
{
    if (order.Items != null)
    {
        this.PlaceOrder(order);
    }
    else
    {
        throw new ArgumentNullException("Order is null");
    }
}
else
{
    throw new ArgumentNullException("Order is null");
}
```
Then we would invert the conditional logic and try to “fail fast”:
```csharp
if (order?.Items == null)
{
    throw new ArgumentNullException("Order is null");
}

this.PlaceOrder(order);
```
Notice that right-off-the-bat we will try to make the method fail? That’s what I mean by “failing fast”.
Next, we might create a reusable method out of this:
```csharp
public static void IsNullGuardClause(this object me, string message)
{
    if (me == null)
    {
        throw new ArgumentNullException(message);
    }
}
```
Finally, we can use this guard clause anywhere we need:
```csharp
// Parenthesize so the extension method still runs when "order" itself is null
// (otherwise the null-conditional chain would skip the guard entirely).
(order?.Items).IsNullGuardClause("Order is null");
this.PlaceOrder(order);
```
This will keep our code much cleaner, avoid any nested conditions, and be way easier to reason about!
I love this pattern.
But, sometimes you might find yourself trying to build a type of guard clause which has certain external dependencies, like a repository or `HttpClient`. Perhaps the logic for the actual guard is quite complex too.
Examples might include determining if:
What I like to do in these cases is use what I’ve been calling “Gate Classes.” They’re like guard clauses, but they are classes… Go figure.
Think of these as a series of gates which each request in the system has to go through (just like middleware). If any of them fail, the gate is closed and the request cannot proceed any further.
Let me show you what I mean.
Imagine we are building part of an insurance processing system.
Next, we have to check whether the claim can be approved and, if so, approve it.
Here’s our use case (Clean Architecture) or, as some might know it, our Command (CQRS):
```csharp
public class ApproveInsuranceClaimCommand : IUseCase
{
    private readonly IInsuranceClaimRepository _claimRepo;
    private readonly IUserRepository _userRepo;

    public ApproveInsuranceClaimCommand(
        IInsuranceClaimRepository claimRepo,
        IUserRepository userRepo
    )
    {
        this._claimRepo = claimRepo;
        this._userRepo = userRepo;
    }

    public async Task Handle(Guid claimId, int approvingUserId)
    {
        var user = await this._userRepo.Find(approvingUserId);

        // 1. Figure out if the user has permission to approve this...

        InsuranceClaim claim = await this._claimRepo.Find(claimId);

        // 2. Figure out if the claim is approvable...

        claim.Approve();
        await this._claimRepo.Update(claim);
    }
}
```
What if the logic that goes into those commented sections required more repositories to make the checks?
Also, what if other use cases in our system needed to make those same checks?
Perhaps we have another use case called `ApproveOnHoldInsuranceClaimCommand` that will approve an insurance claim that was, for some reason, placed on hold until further documentation was supplied by the customer?
Or, perhaps in other use cases we need to check whether users have permission to change a claim?
This will lead to messy code and a lot of copy and pasting!
Just like the guard clause refactoring pattern, why don’t we do the same thing but convert each guard clause into an entirely new class?
The benefit is that we can use dependency injection to inject any dependencies, like repositories, `HttpClient`s, etc., that only each gate class will require.
Now, our use case might look like this (keeping in mind that each gate class may do some complex logic inside):
```csharp
public class ApproveInsuranceClaimCommand : IUseCase
{
    private readonly IInsuranceClaimRepository _claimRepo;
    private readonly CanUserApproveInsuranceClaimGate _canUserApprove;
    private readonly CanInsuranceClaimBeApprovedGate _claimCanBeApproved;

    public ApproveInsuranceClaimCommand(
        IInsuranceClaimRepository claimRepo,
        CanUserApproveInsuranceClaimGate canUserApprove,
        CanInsuranceClaimBeApprovedGate claimCanBeApproved
    )
    {
        this._claimRepo = claimRepo;
        this._canUserApprove = canUserApprove;
        this._claimCanBeApproved = claimCanBeApproved;
    }

    public async Task Handle(Guid claimId, int approvingUserId)
    {
        await this._canUserApprove.Invoke(approvingUserId);

        InsuranceClaim claim = await this._claimRepo.Find(claimId);
        await this._claimCanBeApproved.Invoke(claim);

        claim.Approve();
        await this._claimRepo.Update(claim);
    }
}
```
Notice that there’s no more need for the `IUserRepository`, since it will be handled by the `CanUserApproveInsuranceClaimGate` gate class (and DI).
Note: Why didn’t I make an interface for each gate class? Just for simplicity. By using interfaces instead of concrete classes, you can mock them much more easily for testing.
Let’s look at how we might build the `CanInsuranceClaimBeApprovedGate` gate class:
```csharp
public class CanInsuranceClaimBeApprovedGate
{
    private readonly IInsuranceClaimAdjusterRepository _adjusterRepo;
    private readonly IInsuranceClaimLegalOfficeRepository _legalRepo;

    public CanInsuranceClaimBeApprovedGate(
        IInsuranceClaimAdjusterRepository adjusterRepo,
        IInsuranceClaimLegalOfficeRepository legalRepo
    )
    {
        this._adjusterRepo = adjusterRepo;
        this._legalRepo = legalRepo;
    }

    public async Task Invoke(InsuranceClaim claim)
    {
        // Do some crazy logic with the data returned from each repository!

        // On failure, throw a general gate-type exception that can be handled
        // by middleware or a global error handler somewhere at the top of your stack.
        throw new GateFailureException("Insurance claim cannot be approved.");
    }
}
```
Each gate class will either succeed or fail.
On failure, it will throw an exception that will be caught up the stack. In web applications, there is usually some global exception handler or middleware that can convert these into specific HTTP error responses, etc.
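As a rough sketch of what that global handler might look like in ASP.NET Core (the middleware class name is made up; `GateFailureException` is the exception from the gate class above):

```csharp
// Hypothetical middleware that converts gate failures into HTTP responses.
public class GateFailureMiddleware
{
    private readonly RequestDelegate _next;

    public GateFailureMiddleware(RequestDelegate next) => this._next = next;

    public async Task Invoke(HttpContext context)
    {
        try
        {
            await this._next(context);
        }
        catch (GateFailureException e)
        {
            // 422 Unprocessable Entity is one reasonable choice for a failed business rule.
            context.Response.StatusCode = StatusCodes.Status422UnprocessableEntity;
            await context.Response.WriteAsync(e.Message);
        }
    }
}

// Registered near the top of the pipeline:
// app.UseMiddleware<GateFailureMiddleware>();
```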
If we do need to use this logic in other places, as mentioned above, then we don’t need to re-import all the dependencies required for this logic. We can just simply use the gate class as-is and allow the DI mechanism to plug in all the dependencies for us.
It’s worth mentioning that in some cases your use cases and your gate classes may need to call the same repository method. You don’t want to fetch that data twice (once in your gate class and once in your use case).
In this event, there are ways to fix it.
One is to build a cached repository using the Decorator pattern.
You might rig this up as a scoped dependency (in .NET Core) so the cached data will only be cached for the lifetime of the HTTP request. Or you might just set a timeout on the cache.
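Here’s a minimal sketch of that decorator, assuming the `IInsuranceClaimRepository` from earlier (the class name and cache field are made up):

```csharp
// Decorator pattern: wraps the real repository and caches results.
// Registering it as scoped means the cache only lives for one HTTP request:
// services.AddScoped<IInsuranceClaimRepository, CachedInsuranceClaimRepository>();
public class CachedInsuranceClaimRepository : IInsuranceClaimRepository
{
    private readonly InsuranceClaimRepository _inner;
    private readonly Dictionary<Guid, InsuranceClaim> _cache =
        new Dictionary<Guid, InsuranceClaim>();

    public CachedInsuranceClaimRepository(InsuranceClaimRepository inner) =>
        this._inner = inner;

    public async Task<InsuranceClaim> Find(Guid claimId)
    {
        // Only hit the database the first time this claim is requested.
        if (!this._cache.TryGetValue(claimId, out var claim))
        {
            claim = await this._inner.Find(claimId);
            this._cache[claimId] = claim;
        }

        return claim;
    }

    public Task Update(InsuranceClaim claim) => this._inner.Update(claim);
}
```

Both the gate class and the use case now share one fetch per request without knowing anything about the caching.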
Another way is to allow the use case to inject the raw data into the gate class as a dependency.
In any event, this pattern is very helpful in making your code much easier to test, use and maintain!
Don’t forget to connect with me on twitter or LinkedIn!
I started off having to use technologies and frameworks I had no experience with yet (ASP.NET, T-SQL, etc.)
For the first few months, I felt overwhelmed.
I took forever to do simple things like adding a checkbox to a web page that would get stored in the database!
Maybe you’ve been there too!
After a few months, from time-to-time, I would build SQL scripts to migrate large sets of data from one vehicle manufacturer’s old platform to this new one.
Think something like ALL of Mercedes-Benz USA’s data (sales figures, inventory, etc.) for ALL dealerships in the United States over a period of a decade. Lots of data.
One of the core problems with migrating the data was that, for example, certain codes (for financial accounts, SKUs, etc.) in one system might look totally different in another. It might not even exist at all!
We had to extract the proper code by parsing patterns or manipulating old codes.
Now, while in college, I fell in love with regular expressions. So I was really good at them (not so much anymore! 😂).
My colleagues - peers with over a decade in the industry - would get really surprised when they would see my scripts. I was basically just using regular expressions and SQL to manipulate these data sets. My scripts would typically take just a few SQL statements to do the job.
My colleagues, on the other hand, were building individual command-line executables in C# that would fetch some data from the database, loop through it, do some processing, and put data back into the database.
My solutions would take a few hours or minutes to run, while theirs would take days to run.
Quickly, I became the “go-to” guy whenever the team encountered some really hard pieces of data to extract. All because I knew regular expressions really well!
Go figure!
So, what came out of that?
Trust.
Respect.
Confidence.
Opportunities.
That placed me in the minds of my peers as someone who was a skilled programmer.
Sure, I was known for doing something well in a very specific situation.
But, our minds usually don’t place significance on people who are good at doing the general day-to-day stuff. The people who are remarkable at something always stick out.
Where does this lead?
When new opportunities arise to learn something new or jump onto some new projects, you’ll be at the top of everyone’s mind.
You’ll simply be viewed as someone who can do remarkable things.
Now, I’m not going to tell you to learn regular expressions 😂.
But, for me, it started with regular expressions. Then it was front-end development. Then it was modelling business rules and system architecture.
Here’s my advice: Find a gap within your team or company that could help solve some important problems your company is facing.
But you’re thinking - “Easier said than done!”
Practically speaking, if you are early in your career, just keep an eye out for whenever your peers seem surprised at some skill or way of doing things you have.
This might indicate a gap that could be helpful to your organization!
For those more seasoned developers, it should be more apparent what gaps need to be filled. And, the more experienced you are, the more you should naturally drift into a position of mentoring others.
Once you are seasoned, you can use this principle and apply it to the global community - not just your internal company. Try to figure out where there is a need or where certain trends seem to be pointing.
Here are some diverse examples that are taken off the top of my head (not all devs):
Yes, it’s hard.
If you can’t find anything yet, then just pick something! You need to stick out!
You can always move into other areas of specialization later if you find something else too!
Dan Abramov used to be a .NET developer - now he’s known as one of the top react.js developers!
You need to know how to deal with people, understand how software is best built from a high-level, best practices and trade-offs between them, etc.
We’ll look at a question I received from someone who is struggling with the fact that he just graduated from university - but has no idea how to apply everything he’s learned to solve real-world problems.
Here’s the question I was given:
“Does knowing how to code in a particular language makes one a developer? I just graduated from the university and want to be a professional developer, what must I learn, I’m learning C# and Python now. I’m confused because I don’t even know how to apply the coding I’m learning in solving real life problems?”
This is a problem 😋
This is why I prefer community colleges over universities. Usually, a 2-year college program will give you much more practical experience and knowledge than a university gives in four years!
Anyways - a topic for another day.
The answer: You need to build some real-life projects that you can show off!
Companies want to hire people who can demonstrate that they can solve real problems. The questioner understands this - otherwise, they wouldn’t be asking the question. 😜
You need to be a problem solver - who just happens to know how to build software to solve some of these problems.
Your task then is to find some problem - big or small. Then build something to solve it.
Let me give you a couple examples.
I had the idea one day “I wish .NET development could be as easy as using Laravel”.
.NET is very robust but lacks some of the ease-of-use that frameworks for other languages have - like Laravel.
So, I created a solution to that problem! That was why Coravel was born.
If you turn something like this into a GitHub repo then you can showcase things like:
Another option is to build an entire app!
I did this when I was learning how to build web apps using Laravel years ago. It’s just a small social media spoof app.
This app uses Vue.js and SASS for the front-end and Laravel for the back-end.
In this case, I demonstrated that I could do front-end and back-end development, among other things.
If you can’t make up your mind then just do this. Just make something that will demonstrate you can solve problems with code!
No. It just means you can make something that someone else told you to build.
Sadly, many companies will interview based on irrelevant technical knowledge alone (which doesn’t test if you can think for yourself, etc.)
You need to be someone who can first identify problems within a company or community. Then, you can decide whether or not building software is a good fit for solving it.
What are some other skills and qualities that will help you become a quality developer?
That is by no means an exhaustive list - but are fundamental to becoming a quality developer.
If you don’t know most of these topics then I would suggest learning a little bit about each one in general.
Then, pick a few to really dive into.
Knowing some of these really well can help you stand out among your peers.
But that’s a topic for another day.
Is that true - is it really fundamental?
Dependency injection is baked into .NET Core. And, it’s there for a reason: it promotes building classes having loose coupling and gives developers tools to build maintainable, modular and testable software.
It also provides library authors with tools that can help make installation/configuration of their libraries very simple and straightforward.
As you guessed, this article will go over some things I’ve learned about DI in .NET Core, along with my suggestions for what you should know. 😊
To begin, I want to explore DI for those who may not be too familiar with dependency injection. We’ll start with the basics and move toward some more advanced scenarios.
If you already know what DI is, and how to use interfaces to mock your classes and test them, etc. then you can move onto the Using Dependency Injection In .NET Core section.
Yes, this is going to be a long one. Get ready. 😎
If you aren’t familiar with DI, it pretty much just refers to passing dependencies into your objects as external arguments.
This can be done via an object’s constructor or method.
```csharp
// Argument "dep" is "injected" as a dependency.
public MyClass(ExternalDependency dep)
{
    this._dep = dep;
}
```
Dependency injection then is, at a fundamental level, just passing dependencies as arguments. That’s it. That’s all.
Well… if that was really all of what DI is - I wouldn’t be writing this. 😜
Why would you want to do this? A few reasons:
Let’s look briefly at the idea that this promotes testability (which in turn affects all the other points mentioned).
Why do we test code? To make sure our system behaves properly.
This means that you can trust your code.
With no tests, you can’t really trust your code.
I discuss this in more detail in another blog post about Refactoring Legacy Monoliths - where I discuss some refactoring techniques around this issue.
Of course, DI is more than “just passing in arguments.” Dependency injection is a mechanism where the runtime (let’s say - .NET Core) will automatically pass (inject) required dependencies into your classes.
Why would we ever need that?
Look at this simple class:
```csharp
public class Car
{
    public Car(Engine engine)
    {
        this._engine = engine;
    }
}
```
What if, somewhere else, we needed to do this:
Car ferrari = new Car(new Engine());
Great. What if we wanted to test this `Car` class?

The problem is that in order to test `Car` you need `Engine`. This is a “hard” dependency, if you will.
This means that these classes are tightly tied together. In other words, tightly coupled. Those are bad words.
We want loosely coupled classes. This makes our code more modular, generalized and easier to test (which means more trust and more flexibility).
Some common techniques when testing are to use “mocks”. A mock is just a stubbed-out class that “pretends” to be a real implementation.
We can’t mock concrete classes. But, we can mock interfaces!
Let’s change our `Car` to rely on an interface instead:
```csharp
public class Car
{
    public Car(IEngine engine)
    {
        this._engine = engine;
    }
}
```
Cool! Let’s test that:
```csharp
// Mock configuration would be here.
// "mockEngine" is just a stubbed IEngine.
Car ferrari = new Car(mockEngine);
Assert.IsTrue(ferrari.IsFast());
```
So now we are testing the `Car` class without a hard dependency on `Engine`. 👍
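For illustration, here’s what a hand-rolled mock might look like. This assumes `IEngine` declares a `bool IsFast()` method that `Car` delegates to (in practice you’d likely use a mocking library like Moq):

```csharp
// A hand-rolled mock: a fake engine we fully control in tests.
public class MockFastEngine : IEngine
{
    public bool IsFast() => true; // Always pretend to be a fast engine.
}

// The test can now exercise Car without a real Engine:
IEngine mockEngine = new MockFastEngine();
Car ferrari = new Car(mockEngine);
Assert.IsTrue(ferrari.IsFast());
```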
I had mentioned that using DI allows your code to be modular. Well, it’s not really DI that does it, but the technique above (relying on interfaces).
Compare these two examples:
Car ferrari = new Car(new FastEngine());
and
Car civic = new Car(new HondaEngine());
Since we are relying on interfaces, we have way more flexibility as to what kinds of cars we can build!
Another benefit is that you don’t need to use class inheritance.
This is something I see abused all the time. So much so that I do my best to “never” use class inheritance.
It’s hard to test, hard to understand, and usually leads to an incorrect model anyway, since it’s so hard to change after the fact.
99% of the time there are better ways to build your code using patterns like this - which rely on abstractions rather than tightly coupled classes.
And yes - class inheritance is the most highly coupled relationship you can have in your code! (But that’s another blog post 😉)
The example above highlights why we need DI.
Dependency injection allows us to “bind” a specific type to be used globally in place of, for example, a specific interface.
At runtime we rely on the DI system to create new instances of these objects for us. All the dependencies are handled automatically.
In .NET Core, you might do something like this to tell the DI system what classes we want to use when asking for certain interfaces, etc.
```csharp
// Whenever the type 'Car' is asked for, we get a new instance of the 'Car' type.
services.AddTransient<Car, Car>();

// Whenever the type 'IEngine' is asked for, we get a new instance
// of the concrete 'HondaEngine' type.
services.AddTransient<IEngine, HondaEngine>();
```
`Car` relies on `IEngine`.

When the DI system tries to “build” (instantiate) a new `Car`, it will first grab a new `HondaEngine()` and then inject that into the new `Car()`.
Whenever we need a `Car`, .NET Core’s DI system will automatically rig that up for us! All the dependencies will cascade.
So, in an MVC controller we might do this:
```csharp
public CarController(Car car)
{
    // Car will already be instantiated by the DI system
    // using the 'HondaEngine' as the engine.
    this._car = car;
}
```
Alright - the car example was simple. That’s to get the basics down. Let’s look at a more realistic scenario.
Get ready. 😎
We have a use case for creating a new user in our app:
```csharp
public class CreateUser
{
    // Filled out later...
}
```
That use case needs to issue some database queries to persist new users.
In order to make this testable - and make sure that we can test our code without requiring the database as a dependency - we can use the technique already discussed:
```csharp
public interface IUserRepository
{
    Task<int> CreateUserAsync(UserModel user);
}
```
And the concrete implementation that will hit the database:
```csharp
public class UserRepository : IUserRepository
{
    private readonly ApplicationDbContext _dbContext;

    public UserRepository(ApplicationDbContext dbContext) =>
        this._dbContext = dbContext;

    public async Task<int> CreateUserAsync(UserModel user)
    {
        this._dbContext.Users.Add(user);
        await this._dbContext.SaveChangesAsync();
        return user.Id; // Return the generated Id so callers can use it.
    }
}
```
Using DI, we would have something like this:
```csharp
services.AddTransient<CreateUser, CreateUser>();
services.AddTransient<IUserRepository, UserRepository>();
```
Whenever we have a class that needs an instance of `IUserRepository`, the DI system will automatically build a new `UserRepository` for us.

The same can be said for `CreateUser`: a new `CreateUser` will be given to us when asked (along with all of its dependencies already injected).
Now, in our use case we do this:
```csharp
public class CreateUser
{
    private readonly IUserRepository _repo;

    public CreateUser(IUserRepository repo) => this._repo = repo;

    public async Task<int> InvokeAsync(UserModel user)
    {
        return await this._repo.CreateUserAsync(user);
    }
}
```
In an MVC controller, we can “ask” for the `CreateUser` use case:
```csharp
public class CreateUserController : Controller
{
    private readonly CreateUser _createUser;

    public CreateUserController(CreateUser createUser) =>
        this._createUser = createUser;

    [HttpPost]
    public async Task<ActionResult> Create(UserModel userModel)
    {
        await this._createUser.InvokeAsync(userModel);
        return Ok();
    }
}
```
The DI system will automatically:

1. Try to resolve an instance of `CreateUser`.
2. Since `CreateUser` depends on the `IUserRepository` interface, look to see if there is a type “bound” to that interface.
3. Find `UserRepository`.
4. Build a new `UserRepository`.
5. Inject it into `CreateUser` as the implementation of its constructor argument `IUserRepository`.

Some benefits that are obvious:
And the final benefit, again, we can test this without needing to hit the database.
```csharp
// Some mock configuration...
var createUser = new CreateUser(mockUserRepositoryThatReturnsMockData);
int createdUserId = await createUser.InvokeAsync(dummyUserModel);
Assert.IsTrue(createdUserId == expectedCreatedUserId);
```
This makes for tests that are fast and focused (on the logic inside `CreateUser`).

Now I want to run through some of the more proper technical terms that you should know, along with recommended pieces of knowledge around .NET Core’s DI system.
When we refer to the “DI system” we are really talking about the Service Provider.
In other frameworks or DI systems this is also called a Service Container.
This is the object that holds the configuration for all the DI stuff.
It’s also what will ultimately be “asked” to create new objects for us. And therefore, it’s what figures out what dependencies each service requires at runtime.
When we talk about binding, we just mean that type `A` is mapped to type `B`.

In our `Car` scenario, we would say that `IEngine` is bound to `HondaEngine`.

When we ask for a dependency of `IEngine`, we are returned an instance of `HondaEngine`.
Resolving refers to the process of figuring out what dependencies are required for a particular service.
Using the example above with the `CreateUser` use case, when the Service Provider is asked to inject an instance of `CreateUser`, we would say that the provider is “resolving” that dependency.
Resolving involves figuring out the entire tree of dependencies:

- `CreateUser` requires an instance of `IUserRepository`.
- `IUserRepository` is bound to `UserRepository`.
- `UserRepository` requires an instance of `ApplicationDbContext`.
- `ApplicationDbContext` is available (and bound to the same type).

Figuring out that tree of cascading dependencies is what we call “resolving a service.”
Generally termed scopes, or otherwise called service lifetimes, this refers to whether a service is short or long living.
For example, a singleton (as the pattern is defined) is a service that will always resolve to the same instance every time.
Without understanding what scopes are you can run into some really weird errors. 😜
The .NET Core DI system has 3 different scopes:
services.AddSingleton<IAlwaysExist, IAlwaysExist>();
Whenever we resolve `IAlwaysExist` in an MVC controller constructor, for example, it will always be the exact same instance.
As a side note: This implies concerns around thread-safety, etc. depending on what you are doing.
services.AddScoped<IAmSharedPerRequests, IAmSharedPerRequests>();
Scoped is the most complicated lifetime. We’ll look at it in more detail later.
To keep it simple for now, it means that within a particular HttpRequest (in an ASP .NET Core application) the resolved instance will be the same.
Let’s say we have services `A` and `B`. Both are resolved by the same controller:
```csharp
public SomeController(A a, B b)
{
    this._a = a;
    this._b = b;
}
```
Now imagine `A` and `B` both rely on service `C`.

If `C` is a scoped service, and since scoped services resolve to the same instance for the same HTTP request, both `A` and `B` will have the exact same instance of `C` injected.

However, a different `C` will be instantiated for all other HTTP requests.
services.AddTransient<IAmAlwaysADifferentInstance, IAmAlwaysADifferentInstance>();
Transient services are always an entirely new instance when resolved.
Given this example:
```csharp
public SomeController(A a, A anotherA)
{
}
```
Assuming that type `A` was configured as a transient service, variables `a` and `anotherA` would be different instances of type `A`.

Note: Given the same example, if `A` was a scoped service, then variables `a` and `anotherA` would be the same instance. However, in the next HTTP request, `a` and `anotherA` would be different from the instances in the first request.

If `A` was a singleton, then variables `a` and `anotherA` in both HTTP requests would reference the same single instance.
There are issues that arise when using differently scoped services that depend on each other.
Just don’t do it. It doesn’t make sense 😜
```csharp
public class A
{
    public A(B b) { }
}

public class B
{
    public B(A a) { }
}
```
A singleton, again, lives “forever”. It’s always the same instance.
Transient services, on the other hand, are always a different instance when requested, or resolved.

So here’s an interesting question: when a singleton depends on a transient dependency, how long does that transient dependency live?

The answer is forever. More specifically, as long as its parent lives.

Since the singleton lives forever, so will all of the child objects that it references.
This isn’t necessarily bad. But it could introduce weird issues when you don’t understand what this setup implies.
Perhaps you have a transient service, let’s call it `ListService`, that isn’t thread-safe.

`ListService` has a list of stuff and exposes methods to `Add` and `Remove` those items.
Now, you started using `ListService` inside of a singleton as a dependency.

That singleton will be re-used everywhere. That means on every HTTP request, which implies many, many different threads.

Since the singleton accesses `ListService`, and `ListService` isn’t thread-safe: big problems!
Be careful.
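Here’s a minimal sketch of that captive-dependency trap (the class names follow the discussion above and are otherwise made up):

```csharp
// Not thread-safe: List<T> has no synchronization.
public class ListService
{
    private readonly List<string> _items = new List<string>();

    public void Add(string item) => this._items.Add(item);
    public void Remove(string item) => this._items.Remove(item);
}

public class SingletonService
{
    private readonly ListService _list;

    // Even though ListService is registered as transient, this one
    // instance is "captured" and lives as long as the singleton does.
    public SingletonService(ListService list) => this._list = list;

    // Called concurrently from many requests/threads - data race!
    public void Record(string item) => this._list.Add(item);
}

// services.AddTransient<ListService>();
// services.AddSingleton<SingletonService>(); // ListService is now captive.
```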
Let’s assume now that `ListService` is a scoped service.
If you try to inject a scoped service into a singleton what will happen?
.NET Core will blow up and tell you that you can’t do it!
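You can see this for yourself with a quick sketch. With scope validation on (which ASP.NET Core enables by default in the development environment), resolving a scoped service from the root provider throws an `InvalidOperationException`:

```csharp
var services = new ServiceCollection();
services.AddScoped<ListService>();

// validateScopes: true is what ASP.NET Core turns on in development.
var rootProvider = services.BuildServiceProvider(validateScopes: true);

// Throws InvalidOperationException: cannot resolve a scoped service
// from the root provider.
var oops = rootProvider.GetRequiredService<ListService>();
```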
Remember that scoped services live for as long as an HTTP request?
But, remember how I said it’s actually more complicated than that?…
Under the covers, .NET Core’s service provider exposes a method `CreateScope`.

Note: Alternatively, you can inject an `IServiceScopeFactory` and use the same `CreateScope` method. We’ll look at this later 😉

`CreateScope` creates a “scope” that implements the `IDisposable` interface. It would be used like this:
```csharp
using (var scope = serviceProvider.CreateScope())
{
    // Do stuff...
}
```
The service provider also exposes methods for resolving services: `GetService` and `GetRequiredService`.

The difference between them is that `GetService` returns null when a service isn’t bound to the provider, while `GetRequiredService` throws an exception.
So, a scope might be used like this:
```csharp
using (var scope = serviceProvider.CreateScope())
{
    var provider = scope.ServiceProvider;
    var resolvedService = provider.GetRequiredService(someType);
    // Use resolvedService...
}
```
When .NET Core begins an HTTP request under the covers it’ll do something like that. It will resolve the services that your controller may need, for example, so you don’t have to worry about the low-level details.
In terms of injecting services into ASP controllers - scoped services are basically attached to the life of the HTTP request.
But, we can create our own services (which would then be a form of the Service Locator pattern - more on that later)!
So it’s not true that scoped services are only attached to an HTTP request. Other types of applications can create their own scopes within whatever lifespan or context they need.
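For example, a console app might create one scope per “unit of work”, mimicking what ASP.NET Core does per HTTP request (the registration here is illustrative; a real EF Core context would be registered via `AddDbContext`):

```csharp
var services = new ServiceCollection();
services.AddScoped<ApplicationDbContext>(); // e.g., an EF Core context

var provider = services.BuildServiceProvider();

// One scope per batch job, message, etc.
using (var scope = provider.CreateScope())
{
    // Notice the scope exposes its own ServiceProvider.
    var dbContext = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
    // Do one unit of work with dbContext...
} // Scoped services are disposed when the scope is disposed.
```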
Notice how each scope has its own `ServiceProvider`? What’s up with that?
The DI system has multiple Service Providers. Woah 🤯
Singletons are resolved from a root service provider (which exists for the lifetime of your app). The root provider is not scoped.
Anytime you create a scope - you get a new scoped service provider! This scoped provider will still be able to resolve singleton services, but by proxy they come from the root provider as all scoped providers have access to their “parent” provider.
Here’s the rundown of what we just learned:
So what happens when we try to resolve a scoped service from the root provider (a non-scoped provider)?…
Boom 🔥
All that to say that scoped services require a scope to exist.
Singletons are resolved by the root provider.
Since the root provider has no scope (it’s a “global” provider in a sense) - it just doesn’t make sense to inject a scoped service into a singleton.
What about a scoped service that relies on a transient service?

In practice it’ll work. But, for the same reasons as using a transient service inside a singleton, it may not behave as you expect.

The transient service that is used by the scoped service will live as long as the scoped service does.
Just be sure that makes sense within your use-case.
As library authors we sometimes want to provide native-like tools. For example, with Coravel I wanted to make the library integrate seamlessly with the .NET Core DI system.
How do we do that?
As mentioned in passing, .NET Core provides a utility for creating scopes. This is useful for library authors.
Instead of grabbing an instance of `IServiceProvider`, library authors should probably use `IServiceScopeFactory`.
Why? Well, remember how the root service provider cannot resolve scoped services? What if your library needs to do some “magic” around scoped services? Oops!
Coravel, for example, needs to resolve certain types from the service provider in certain situations (like instantiating invocable classes).
Entity Framework Core contexts are scoped, so doing things such as performing database queries inside your library (on behalf of the user/developer) is something you may want to do.
This is something that Coravel Pro does - execute queries from the user’s EF Core context automatically under-the-covers.
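A rough sketch of what that library code might look like. The `InvocableRunner` class is made up for illustration; `IInvocable` (with its `Task Invoke()` method) is Coravel’s interface:

```csharp
// Library code that safely resolves scoped services outside an HTTP request.
public class InvocableRunner
{
    private readonly IServiceScopeFactory _scopeFactory;

    public InvocableRunner(IServiceScopeFactory scopeFactory) =>
        this._scopeFactory = scopeFactory;

    public async Task RunAsync(Type invocableType)
    {
        // Create our own scope, since the root provider can't resolve
        // scoped services (like an EF Core context).
        using (var scope = this._scopeFactory.CreateScope())
        {
            var invocable = (IInvocable)scope.ServiceProvider
                .GetRequiredService(invocableType);

            await invocable.Invoke();
        }
    }
}
```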
As a side note, issues around capturing services in a closure to be used in a background `Task`, and/or resolving services from a background `Task`, also create the need to resolve services manually (which Coravel needs to do).
David Fowler has written briefly about this here if interested.
In general, the service locator pattern is not a good practice. This is when we ask for a specific type from the service provider manually.
```csharp
using (var scope = serviceProvider.CreateScope())
{
    var provider = scope.ServiceProvider;
    var resolvedService = provider.GetRequiredService(someType);
    // Use resolvedService...
}
```
However, for cases like mentioned above, it is what we need to do - grab a scope, resolve services and do some “magic”.
This would be akin to how .NET Core prepares a DI scope and resolves services for your ASP .NET Core controllers.
It’s not bad because it’s not “user code” but “framework code” - if you will.
We looked at some reasons behind why dependency injection is a useful tool at our disposal.
It helps to promote:
Then we looked at how dependency injection in .NET Core is used, and some of the lower-level aspects of how it works.
In general, we found problems arise when services rely on other services who have a shorter lifetime.
Finally, we looked at how .NET Core provides library authors with some useful tools that help make integration with .NET Core’s DI system seamless.
I hope you learned something new! As always, leave your thoughts in the comments 👌
I’ll be writing new content over at my new site/blog. Check it out!
In the spirit of the season, we’ll be discussing how Santa Claus has recently been using .NET Core to build his internal Christmas present processing system.
Santa is an intermediate developer but has been learning the ins-and-outs of .NET Core.
He recently needed to build a system that was robust in terms of security and ease of development.
He decided that .NET Core was the best choice when considering these criteria.
Santa didn’t want to re-invent the wheel - but he needed a reliable yet simple way to schedule background tasks, queue work so his web app was responsive (mostly for the elves), etc.
One day he came across Coravel - which is a near-zero config open source library for .NET Core developers.
Coravel focuses on helping developers get their web applications up-and-running fast - without compromising code quality. It makes what are usually very advanced features super easy-to-use and accessible - without needing to install any extra 3rd-party infrastructure:
Because it’s written specifically as a set of tools targeted for .NET Core, it takes advantage of native features - such as full support for the built-in dependency injection services and the hosted background services.
For example, you can inject dependencies/services into scheduled tasks, queued tasks, event listeners, etc. with zero fuss!
Santa has really enjoyed using Coravel - especially the time savings gained from not having to configure and install other dependencies for scheduling, queuing, event broadcasting, etc. individually.
He especially loves that Coravel ties into .NET Core’s DI system so seamlessly.
But - now Santa has to schedule some really long running tasks. Tasks that might take hours to run. And these are important tasks.
Doing this inside his ASP .NET Core application is not an option since doing these types of long-running tasks in a web app will cause issues (as you probably know).
Santa decided to check out Coravel’s GitHub repo - just in case this has been addressed before.
It turns out that there is a sample to address this exact concern!
I asked Santa if I could share how he decided to implement this. He agreed, but I was only permitted to show a very small sample of his system.
One of the benefits of Coravel being built specifically for .NET Core is that it’s so simple to configure.
Combined with one of .NET Core’s coolest features, HostBuilder, you can do some really powerful things in just a few lines of code.
HostBuilder, by the way, lets you construct a .NET Core application by adding just the specific pieces you need. Then you can “host” whatever you need (mini-API endpoints, console app running background tasks, or multiple hosted services) without the full dependencies needed for a typical web project.
Because Coravel is not a port of a .NET Framework library but is built specifically for .NET Core, Coravel’s features - such as scheduling and queuing - are implemented as hosted services. This means Coravel is 100% compatible in non-web scenarios.
Using the sample mentioned above, let’s look at a very basic implementation of using HostBuilder along with Coravel’s scheduling:
```csharp
class Program
{
    static void Main(string[] args)
    {
        var host = new HostBuilder()
            .ConfigureAppConfiguration((hostContext, configApp) =>
            {
                configApp.AddCommandLine(args);
            })
            .ConfigureServices((hostContext, services) =>
            {
                // Add Coravel's Scheduling...
                services.AddScheduler();
            })
            .Build();

        // Configure the scheduled tasks....
        host.Services.UseScheduler(scheduler =>
            scheduler
                .Schedule(() => Console.WriteLine("This was scheduled every minute."))
                .EveryMinute()
        );

        // Run it!
        host.Run();
    }
}
```
This will be a console app that hooks into Coravel’s scheduler.
Every minute something will happen.
Ok - that’s a simple sample. Santa needs something more maintainable and he really needs to inject his Entity Framework Core Db context, HttpClientFactory, etc. into his scheduled tasks.
With Coravel you can use Invocable classes to solve this problem.
Invocables are plain classes that can be scheduled, queued, etc. with full support for .NET Core dependency injection.
They represent some “job” in your application:
- Sending automated emails
- Cleaning your database
- Processing messages from an external queue
- Syncing data between an external API and your system
Santa has an API that he uses to fetch who is nice and naughty (it’s a secret end-point from his legacy system). He then needs to do some CPU intensive processing to validate the data, and then store the results in his database.
He needs this done once every hour using the invocable classes he’s created.
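Coravel invocables implement its IInvocable interface, which requires a single Invoke() method. Here’s a hypothetical sketch of what one of Santa’s invocables might look like - the interface is redeclared so the sketch stands alone, and the fetcher delegate and list are stand-ins for his real API client and EF Core DbContext:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Mirrors Coravel's IInvocable interface (Coravel.Invocable): one Invoke()
// method. Redeclared here only so this sketch is self-contained.
public interface IInvocable
{
    Task Invoke();
}

// A hypothetical version of one of Santa's jobs. The fetcher delegate and
// list are stand-ins for his real API client and EF Core DbContext -
// anything registered with .NET Core's DI can be constructor-injected.
public class PutNaughtyChildrenFromAPIIntoDb : IInvocable
{
    private readonly Func<Task<string[]>> _fetchFromApi;
    private readonly List<string> _naughtyListDb;

    public PutNaughtyChildrenFromAPIIntoDb(Func<Task<string[]>> fetchFromApi, List<string> naughtyListDb)
    {
        _fetchFromApi = fetchFromApi;
        _naughtyListDb = naughtyListDb;
    }

    public async Task Invoke()
    {
        var children = await _fetchFromApi(); // I/O: fetch from the legacy API
        _naughtyListDb.AddRange(children);    // persist the results
    }
}
```

Because the dependencies come in through the constructor, the scheduler can resolve them from the service provider every time the job runs.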
```csharp
scheduler
    .Schedule<PutNaughtyChildrenFromAPIIntoDb>()
    .Hourly();

scheduler
    .Schedule<PutNiceChildrenFromAPIIntoDb>()
    .Hourly();
```
Coravel internally uses Task.WhenAll to make sure async calls within scheduled tasks are processed efficiently.
However, CPU intensive tasks will “hog” the thread that is currently processing due tasks. This would force other due tasks to wait until the CPU intensive processing is completed.
This design makes sure that in web application scenarios Coravel won’t be hogging multiple threads that could otherwise be (and should be) used to respond to HTTP requests.
But Santa is specifically using a console application so he doesn’t need to worry about that!
What should he do?
Coravel solves this problem with Schedule Workers.
By using schedule workers Santa can put each of these tasks onto their own dedicated pipeline/thread:
```csharp
scheduler.OnWorker("NaughtyWorker");
scheduler
    .Schedule<PutNaughtyChildrenFromAPIIntoDb>()
    .Hourly();

scheduler.OnWorker("NiceWorker");
scheduler
    .Schedule<PutNiceChildrenFromAPIIntoDb>()
    .Hourly();
```
This will now execute each task in parallel so Santa can fetch and process both of these more efficiently. Any intensive CPU work that either task may perform will not cause the other to wait/block.
In some cases, you may want to put multiple tasks onto one worker and perhaps dedicate one worker for a task that is known to either take a long time or is just CPU intensive.
For example:
```csharp
scheduler.OnWorker("EmailTasks");
scheduler
    .Schedule<SendNightlyReportsEmailJob>().Daily();
scheduler
    .Schedule<SendPendingNotifications>().EveryMinute();

scheduler.OnWorker("CPUIntensiveTasks");
scheduler
    .Schedule<RebuildStaticCachedData>().Hourly();
```
The first two tasks don’t take that long to complete and are not a priority in terms of CPU usage (they are mostly I/O bound).
The third, however, needs to be able to work without being blocked or blocking other tasks.
This setup will ensure that RebuildStaticCachedData is always executed in isolation with its own dedicated pipeline.
I hope you’ve enjoyed this article and would love to hear from you in the comments! Maybe there’s a better way to do this? Maybe you simply prefer some other way?
I’ll be writing new content over at my new site/blog. Check it out!
Don’t forget to connect with me on twitter or LinkedIn!
.NET Core really is the next generation of .NET development and I believe that it’s time for .NET developers everywhere to migrate!
For those new to .NET Core, I’d like to quickly explain what .NET Core is.
Imagine being able to run C# apps (natively) in Linux.
Imagine being able to use “npm like” tools that scaffold new projects for you.
What if you didn’t need Visual Studio anymore? What if you could build entire apps using VS Code? 🤯
.NET Core is basically the .NET languages (C#, F#, etc.) re-written onto a completely new runtime/SDK.
This runtime is not Windows specific. That means it runs on Linux, Mac and Windows.
It offers developers modern tooling that can scaffold projects, build, run, test and deploy using incredibly easy-to-use CLI tools.
.NET Core isn’t simply a huge framework (like .NET Framework) that you build on top of. It’s included as NuGet packages that you can use to carefully craft only the tiny pieces that you need - if you do need that level of flexibility.
Since .NET Core apps have much less of a footprint, they are perfect for scenarios where you need to build small, high-performing, isolated applications - microservices.
Aren’t all the massive/large-scale apps using Node.js though? Node.js is typically touted as the highest-performing and most scalable platform for building modern web apps. Is that true?
I want to go through some of the companies and products that are currently using .NET Core in production (taken from these case studies on the official .NET site). We’ll look at why they decided to use .NET Core and what benefits they found were critical to their businesses.
Quoting the official site:
Using the same-size server, we were able to go from 1,000 requests per second per node with Node.js to 20,000 requests per second with .NET Core.
That’s pretty amazing! That’s a 20x increase in requests per second. Imagine how much money you could save by moving to a smaller hosting environment because you don’t need so much “juice”?
It also gives us strong benefits with regard to operation costs in the cloud, because we can use it to run some workloads on Linux machines.
Not only does .NET Core allow you to run-more-code-on-less-machine, but you can also run an entire .NET Core app within a Linux operating system. Since Linux distributions are generally free, not requiring paid OS licenses can save a lot of money.
Services can be developed more quickly, perform faster in production, and scale better if they’re written using .NET Core
.NET Core gives us the freedom to take advantage of new infrastructure technologies that run on Linux such as Kubernetes and Docker.
.NET Core was designed to allow you to create small isolated services and scale them independently if needed.
You don’t need to buy a new massive server just because one small part of your app is seeing a higher load. Just build a .NET Core app and stick it into a container.
Now you can infinitely scale your app as needed!
The fact that .NET Core is cross-platform allows developers more freedom in how they develop the product because, at the end of the day, it’s going to run on .NET Core, and that will be macOS, Linux, or Windows
.NET Core can target Linux, Windows or Mac. This means you can build .NET Core apps using a Mac or Ubuntu - using VS Code or Sublime Text. You have more freedom now to use the tools that work for you.
Personally, I’ve been using VS Code to build all my .NET Core apps and libraries. I don’t need Windows. I don’t need Visual Studio. And it’s great!
ASP.NET is open source, that allows us to contribute back to it if we have any performance issues which Microsoft review and together we make a better product.
Ben Adams is considered one of the most knowledgeable .NET developers in the world. He doesn’t work for Microsoft - he’s the CTO of Illyriad Games.
Since .NET Core is open source, Ben has been one of the many non-Microsoft developers who have made .NET Core more performant, added new features and provided insights into the product.
“But I’m not some super sized-organization” - you might say. What about building my side-projects quickly?
.NET Core gives you some fantastic tools that can accelerate your development:
On top of all that, I’ve been building an open source library that can accelerate building .NET Core web apps even faster!
Coravel gives you a near-zero config set of tools such as Task Scheduling, Queuing, Caching and a CLI that lets you scaffold even more so you can be super productive.
Thinking about building your next project using .NET Core? Start here.
Do you have a .NET Framework app that you are considering migrating to .NET Core? Start here.
Already know how to use .NET Core but need more tools that will help you build fully featured web apps? Start here.
After all, I just want to build awesome apps and not tinker with the same old boilerplate stuff.
What if I just want to schedule my code to run once a week? I don’t want to configure Windows Task Scheduler. Or Cron. I just want to tell my code to schedule itself. Shouldn’t that be easy to do (like with Laravel)?
So I set out to learn some .NET Core while building an easy to use scheduler - similar to Laravel’s. Just to challenge myself and see if I could do it. Eventually, “version 1” was completed.
I don’t have much free time, so I need to make sure what I do is important. So I decided to tweet what I had on Twitter to maybe get some feedback. Who knows…
To my surprise, David Fowler (.NET Core Architect) responded and the tweet gained way more interactions than I ever thought it would!
What if you could do this in your .Net Core apps? https://t.co/dj7JgbVELC#dotnet #coravel #dotnetcore pic.twitter.com/KNHZwJbrg0
— James Hickey (@jamesmh_dev) June 22, 2018
That was on a Friday. In just two days the Coravel repo became the top #4 trending C# repo on all of GitHub! Wow! I was (and still am) super humbled and surprised!
Obviously, people thought this was a useful tool. So I decided to keep going and build other features that I personally would like available:
When I had the time, I did my best to build them. Today, there are even more features than I had anticipated.
Originally, the scheduler was just a fun challenge. But now people were actually using it. So I had to re-write it to be more flexible, with more unit tests, thread safety, etc.
Throughout this journey, I’ve learned tons - and am still learning each day.
I wanted to start documenting some of these things - and give tips to those wanting to build their own .NET based open source projects.
I had no experience building NuGet packages. I’ve never done it before.
When I had originally built the Mailer feature, it was a separate .NET project that was referenced by the “main” Coravel project. My thinking was that NuGet packages just swallowed any referenced projects and included them.
Wrong.
NuGet packages don’t do that. And, it sounds like they never will.
What do you do? There are two solutions to my knowledge:
The second option does force you to focus on building small modular packages.
But one of the driving philosophies behind Coravel is that it should be as easy as possible to get up-and-running. Multiple packages, in my eyes, were straying from this.
I chose to go with option 1. I didn’t want to have to maintain more than one package (and all the extra maintenance that comes with it).
But what happens when you make the wrong choice?
I had comments from people around the idea of not wanting to include the Mailer’s dependencies (MailKit, etc.) when just wanting to use, for example, the scheduler. Fair enough.
So I ran a poll to see what others thought:
Question for peeps: Right now Coravel includes MailKit as a dependency for the Mailer. Should the Mailer be a separate nuget so that using the Scheduler doesn’t include MailKit? Should all the features be split? #AspNetCore #ASPNET #Dotnet #dotnetcore #coravel
— James Hickey (@jamesmh_dev) August 31, 2018
Since this project is for others to use - it’s important to know how people generally are wanting to use it.
But, I’m not fond of having to maintain multiple packages. The technical complexity this would introduce (splitting all the features) isn’t wanted right now either.
So, I decided to go with splitting the Mailer into its own NuGet package.
So, I had made the wrong choice in bundling the Mailer into the main Coravel package.
Is that a sin? Nope. Did I beat myself up because of it? Nope.
As your project grows you discover how people are using your project and adapt. Actively listen to how people want to use your project.
For example, I’ve had people mention they don’t like the fact that Coravel is targeting .NET Core and not .NET Standard.
Originally, Coravel was designed only for .NET Core apps. One of the main principles was that Coravel needed to just “hook” into the native .NET Core tooling so that it was seamless and super easy to configure - especially making things hook into the service provider, the IHostedService interface, etc.
However, slimming of some dependencies (due to some issues posted by others on GitHub) caused me to learn more about how .NET Core works. Due to those changes, switching to .NET Standard might actually make sense now. So I’m looking at investigating this as we speak.
And that’s OK. I may have gone the “wrong” route originally. But I’m still learning.
It’s OK to learn from your mistakes. That’s how we gain experience. Go back and make it “right”.
It’s been 4 months since I started Coravel. Have I been coding every night for 4 hours non-stop? Nope.
I’ve got 7 kids. We homeschool. I’m married. I’ve got a full-time job. I’ve got a life.
Do I have much free time? Nope.
The little bit of free time that I do get has to be allocated to a handful of “important” projects, such as this blog.
If I spend time working on this blog then it’s time that I won’t be able to work on other things - like Coravel.
It’s a juggling act for me. Probably for you too.
Having a long-term vision is what I need to ensure that I don’t get overwhelmed when I don’t get around to doing X for a few weeks.
And it’s what you’ll need if you want to build a lasting, useful and robust open source project.
Talking about time-suckers…
Pull-requests take a lot of time. Way more than I had ever expected.
I’m super excited that people are interested in contributing to something that I’ve built.
But open source doesn’t mean that anyone can change projects to be what they want. I have a vision and philosophy behind everything I create in Coravel. Others can’t read my mind.
It’s my job then to make sure all contributions adhere to that vision. This means (for me) addressing what may seem like super insignificant details - one-word changes to documentation, variable naming, etc.
But, to me, all the small details matter. It’s what makes Coravel unique. It’s what makes it easy to use.
So don’t feel bad when you aren’t just accepting pull-requests right-and-left. It’s still your project.
I hope you enjoyed the story behind Coravel so far and some things I’ve learned along the way.
I’d like to write another post soon that highlights the technical things I’ve learned - and get into some code!
What are async and await in C#? Why should a .NET developer in 2019 need to know what they are and how to use them? Perhaps you’ve used async/await but find yourself having to go back and figure it out again?
I’ve had to figure out the hard way that, for example, using ConfigureAwait(false).GetResult() doesn’t magically make your async method “work” synchronously.
But this isn’t an in-depth look at the internals of async/await and how to get fancy with it.
This is “Async/await for the rest of us”. Who are the rest of us?
We:
- have heard of async/await
- have tried to use async/await but couldn’t quite get it working properly at times
- wonder why a method can return a Task but not use the async keyword
- wonder how a method marked async can be awaited by its caller
This - I hope - is an article for “the rest of us” that’s to the point and practical.
P.S. If you do want to dig into this topic more, I’d suggest starting with this article.
What’s the benefit of using async/await?
If you build web apps using .NET technologies - the answer is simple: Scalability.
When you make I/O calls - database queries, file reading, reading from HTTP requests, etc. - the thread that is handling the current HTTP request is just waiting.
That’s it. It’s just waiting for a result to come back from the operating system.
Performing a database query, for example, ultimately asks the operating system to connect to the database, send a message and get a message in return. But that is the OS making these requests - not your app.
Using async/await allows your .NET web apps to be able to handle more HTTP requests while waiting for I/O to complete.
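To make the waiting concrete, here’s a small self-contained sketch where Task.Delay stands in for real I/O calls: three 200 ms “I/O” operations awaited together finish in roughly 200 ms rather than 600 ms, because no thread sits blocked while the waiting happens.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class IoDemo
{
    // Three simulated 200ms I/O calls awaited together: total wall time is
    // roughly 200ms, not 600ms, because no thread blocks while waiting.
    public static async Task<long> TimeConcurrentIoAsync()
    {
        var sw = Stopwatch.StartNew();

        // Task.Delay stands in for real I/O (database query, HTTP call, file read).
        Task io1 = Task.Delay(200);
        Task io2 = Task.Delay(200);
        Task io3 = Task.Delay(200);

        await Task.WhenAll(io1, io2, io3);
        return sw.ElapsedMilliseconds;
    }
}
```

In a web app, the thread freed up during those awaits can be handling other HTTP requests.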
But desktop apps don’t handle HTTP requests…
Well, desktop apps do handle user input (keyboard, mouse, etc.), animations, etc. And there’s only one UI thread to do that.
If we consider that HTTP requests in a web app are just user input, and desktop (or app) keyboard and mouse events are just user input - then it’s actually worse for desktop/app developers! They only get one thread to handle user input!
The I/O issues and fixes still apply.
However, the issue of CPU intensive tasks is another concern. In a nutshell, these types of tasks should not be done on the UI thread.
The types of tasks would include:
If your app does this (on the main/UI thread), then there’s nothing to handle user input and UI stuff like animations, etc.
This leads to freezing and laggy apps.
The solution is to offload CPU intensive tasks to a background task/thread. This starts to get into queuing up new threads/tasks, how to use ConfigureAwait(false) to keep asynchronous branches of your code on a non-UI context, etc. All things beyond the scope of our article.
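Though those details are out of scope, the core move is small enough to sketch: wrap the CPU-bound work in Task.Run so it executes on a thread-pool thread and the calling (UI) thread only awaits the result. The sum-of-squares loop here is just a stand-in for real CPU-heavy work.

```csharp
using System.Threading.Tasks;

public static class CpuOffloadDemo
{
    // Task.Run moves CPU-bound work onto a thread-pool thread so the UI
    // (or request) thread is free while it runs. The sum-of-squares loop
    // is a stand-in for real CPU-heavy work.
    public static Task<long> SumSquaresInBackgroundAsync(int n)
    {
        return Task.Run(() =>
        {
            long total = 0;
            for (int i = 1; i <= n; i++)
                total += (long)i * i;
            return total;
        });
    }
}
```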
Let’s start looking at the async/await keywords and how they are to be used.
There’s confusion over the async keyword. Why? Because it looks like it makes your method asynchronous. But, it doesn’t.
That’s confusing. The async keyword doesn’t make my method asynchronous? Yep.
All the async keyword does is enable the await keyword. That’s it. That’s all. It does nothing else.
So, just think of the async keyword as the enableAwait keyword.
The await keyword is where the magic happens. It basically says (to the reader of the code):
I (the thread) will make sure if something asynchronous happens under here, that I’ll go do something else (like handle HTTP requests). Some thread in the future will come back here once the asynchronous stuff is done.
Generally, the most common usage of await is when you are doing I/O - like getting results from a database query or getting contents from a file.
When you await a method that does I/O, it’s not your app that does the I/O - it’s ultimately the operating system. So your thread is just sitting there…waiting…
await will tell the current thread to just go away and do something useful. Let the operating system and the .NET framework get another thread later - whenever it needs one.
Consider this as a visual guide:
```csharp
var result1 = await SomeAsyncIO1(); // OS is doing I/O while thread will go do something else.
// A thread gets the results.
var result2 = await SomeAsyncIO2(result1); // Thread goes to do something else.
// One comes back...
await SomeAsyncIO3(result2); // Goes away again...
// Comes back to finish the method.
```
You might ask yourself at this point:
If the async keyword doesn’t make a method asynchronous then what does?
Well - it’s not the async keyword, as we learned. Go figure.
Any method that returns an “awaitable” - Task or Task<T> - can be awaited using the await keyword.
There are actually other awaitables types. And, an “awaitable” method doesn’t strictly have to be an asynchronous method. But ignore that - this is meant to be for “the rest of us.”
For the purpose of this article, we’ll assume that an “asynchronous method” is a method that returns Task or Task<T>.
When will we ever need to return a Task from a method? It’s usually when doing I/O. Most I/O libraries or built-in .NET APIs will have an “Async” version of a method.
For example, the SqlConnection class has an Open method that will begin the connection. But, it also has an OpenAsync method. It also has an ExecuteNonQueryAsync method.
```csharp
public async Task<int> IssueSqlCommandAsync()
{
    using (var con = new SqlConnection(_connectionString))
    {
        // Some code to create an sql command "sqlCommand" would be here...
        await con.OpenAsync();
        return await sqlCommand.ExecuteNonQueryAsync();
    }
}
```
What makes the OpenAsync and ExecuteNonQueryAsync methods asynchronous is not the async keyword - it’s that they return a Task or Task<T>.
It is possible to do something like this (notice the lack of async and await):
```csharp
public Task GetSomeData()
{
    return DoSomethingAsync();
}
```
And then await that method:
```csharp
// Inside some other method....
await GetSomeData();
```
GetSomeData doesn’t await the call to DoSomethingAsync - it just returns the Task. Remember that await doesn’t care if a method is using the async keyword - it just requires that the method return a Task.
It is possible to do this - create a method that calls an asynchronous method but doesn’t await.
However, this is considered a bad practice. Why?
Since this article is supposed to be to the point and practical:
Using async/await “all the way down” simply captures exceptions in asynchronous methods better.
If you mark every method that returns a Task with the async keyword - which in turn enables the await keyword - it handles exceptions better and makes them understandable when looking at the exception’s message and stack trace.
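Here’s a self-contained sketch of that difference (the method names are made up for illustration): the method that awaits shows up in the exception’s stack trace, while the method that merely forwards the Task does not, because it’s no longer on the stack when the exception is actually thrown.

```csharp
using System;
using System.Threading.Tasks;

public static class StackTraceDemo
{
    static async Task ThrowAsync()
    {
        await Task.Yield(); // force the throw to happen in a continuation
        throw new InvalidOperationException("boom");
    }

    // Elides async/await: just forwards the Task. This frame is gone from
    // the stack by the time the exception is actually thrown.
    public static Task MiddleElided() => ThrowAsync();

    // Uses async/await: the generated state machine records this method
    // in the exception's stack trace.
    public static async Task MiddleAwaited() => await ThrowAsync();

    public static string CaptureTrace(Func<Task> call)
    {
        try
        {
            call().GetAwaiter().GetResult();
        }
        catch (InvalidOperationException ex)
        {
            return ex.StackTrace ?? "";
        }
        return "";
    }
}
```

Calling CaptureTrace with MiddleAwaited yields a trace containing “MiddleAwaited”; calling it with MiddleElided yields a trace with no mention of that method at all - which makes real-world debugging much harder.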
To summarize briefly:
The async keyword doesn’t make methods asynchronous - it simply enables the await keyword.
If a method returns a Task or Task<T> then it can be used by the await keyword to manage the asynchronous details of our code.
Doing I/O synchronously always results in blocked threads. This means your web apps can’t process as many HTTP requests in parallel, and it leads to freezing, laggy desktop apps.
Using async/await helps us create code that will allow our threads to stop blocking and do useful work while performing I/O.
This leads to web apps that can handle more requests per second and apps that are more responsive for their users.
I hope this is an understandable introduction to async/await. It’s not an easy topic - and as always - gaining experience by using this feature will, over time, help us to understand what’s going on.
There’s so much more to be said and so many more concepts surrounding async/await. Some include:
- What is a SynchronizationContext? When should I be aware of this?
- What about async void? What does it do? Should I use it?
- What happens when I call a method marked async from synchronous code?
Un-closed SqlConnections? Bad stuff. Bad stuff that will inevitably bring your IIS website crumbling down to ashes. Well… something like that. We all know that we should take extra care to always close our DB connections. Right?
But what happens when developers try to get fancy with their DB connections and focus more on being able to re-use open DB connections rather than being safe? Well, the consequences were bestowed upon me a few weeks ago.
Let me explain (briefly) what happened, what could happen to you, and a way to fix it - while maintaining the flexibility of re-using open DB connections and being able to safely use DB transactions from your C# code (using ADO).
In the codebase I work with for my day job, there are utilities for performing SQL queries (well… calling stored procs) in C#. When I first saw this code (many months ago) I immediately thought that it was not a good thing (and told the team my thoughts).
Why? Because mishandling DB connections leads to very bad things. Better to be safe than sorry. But, it wasn’t considered an issue - it had never caused any issues yet. It was working well so far. So, it wasn’t worth fussing over - other than making my concerns public.
The pattern in question is something like this:
- Each query method accepts a SqlConnection as a parameter.
- Check if that SqlConnection is already opened. If it is, don’t close it, since the caller wants to keep it open to issue further queries.
- If the SqlConnection is closed, open it, then run the query, then close it.
The code might look like this:
```csharp
var origConnectionState = cmd.Connection.State; // captured before opening

if (cmd.Connection.State != ConnectionState.Open)
{
    cmd.Connection.Open();
}

// Do your stuff...

if (origConnectionState == ConnectionState.Closed)
{
    cmd.Connection.Close();
}
```
And yes, there is no try block.
A few weeks ago, we had major issues where our site kept locking up for everyone. Long story short - the code was calling a stored procedure that didn’t exist, the DB call threw an error, and the calling code (just by chance!) silently caught the error and kept on chugging away.
Since the code was attempting to be fancy with the connections - when an error was thrown it by-passed the code that was supposed to close the connection. Normally, exceptions like this would be caught during development (and fixed).
But what happens when things aren’t thoroughly tested? And the caller is silently catching the error? Boom!
Users saw no error. Until all those un-closed connections piled up.
This took a full day to figure out where the issue was coming from since the exception was silently caught! Eventually, looking at IIS logs and the event viewer gave us some clues. I literally found and fixed the issue during my shift’s last ten minutes. Phew! What a scramble!
Why would anyone do this? Well, the reasoning is that the caller of this method might want to issue multiple queries. Instead of having to open and close multiple SqlConnections, we can gain more performance by keeping one opened and closing it ourselves after all our queries are completed. The extra re-connections to the database would be avoided.
In reality, there was not even one instance where a caller was re-using an open SqlConnection in the entire codebase. :(
Is it true that this would boost performance though? Not really. By default, .NET ADO connections are pooled. This means that calling Close() on your DB connection doesn’t really close it (physically).
.NET handles this under the covers and keeps a pool of active connections. Any performance benefit gained is most likely just the cost of instantiating a new C# object in memory plus the cost of finding a connection in the pool. The actual cost of physically connecting to the database is most likely not being avoided anyway.
Rule of thumb: Test real-world scenarios if performance is an issue. Then, test your optimizations under those same scenarios and compare.
There’s another issue here. Who is responsible for managing the “low-level” database access? In the code above - it’s:
So, essentially, you could have an MVC controller who is creating a SqlConnection, passing it into the utility to perform a query, then closing the connection back inside the controller action.
But that never happens, right? Well, sometimes we inherit code that does do this. :(
This is like going to a mechanic’s shop, asking them to fix your car, but giving them your cheap tools to use to fix the vehicle.
The same is true with the code above. Lower-level database objects should always own that domain/responsibility. Someone else shouldn’t be telling them how to manage their database connections.
How could we safely re-use open SqlConnections if we wanted to?
In this same code base, I introduced using Dapper to replace database access moving forward. Dapper turned out to be more than 5 times faster than the hand-built mini-object mapper that was in place anyways. That’s another story for another day.
In conjunction with these changes, I had created a few methods that allowed us to use SqlConnections safely - but retaining the flexibility of connection reuse and the ability to use transactions.
In a past post I explained how you can use higher-order functions to encapsulate repetitive code while allowing the “meat” of your code to be passed in. What do I mean?
Take, for example, a Query class that has the method UsingConnectionAsync, which might be used like this:
```csharp
await _query.UsingConnectionAsync<int>(
    async con =>
    {
        var result1 = await con.ExecuteAsync(sql1, parameters1); // ExecuteAsync comes from Dapper
        var result2 = await con.ExecuteAsync(sql2, parameters2);
        return PutTheResultsTogether(result1, result2);
    });
```
Such a method accepts a function that will be given a SqlConnection. The function can use that SqlConnection however it wants. Once done, the higher-order method UsingConnectionAsync will close the connection “magically” (in a manner of speaking).
We can wrap all our database related code inside of a “scope”, as it were, and just do what we want. It’s safe and yet flexible.
You could make another method UsingTransactionAsync that would open a SqlConnection, start a transaction, roll back on errors and commit on success.
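Here’s a minimal sketch of the shape of this higher-order pattern, with a fake connection standing in for SqlConnection so the sketch stays self-contained - the try/finally is what guarantees the connection is closed even when the caller’s lambda throws:

```csharp
using System;
using System.Threading.Tasks;

// FakeConnection stands in for SqlConnection so the sketch is self-contained;
// real code would new up a SqlConnection with a connection string.
public class FakeConnection
{
    public bool IsOpen { get; private set; }
    public Task OpenAsync() { IsOpen = true; return Task.CompletedTask; }
    public void Close() => IsOpen = false;
}

public class Query
{
    // The caller's lambda receives an open connection and issues whatever
    // queries it wants; try/finally guarantees the connection is closed
    // even when the lambda throws.
    public async Task<T> UsingConnectionAsync<T>(Func<FakeConnection, Task<T>> useConnection)
    {
        var con = new FakeConnection();
        try
        {
            await con.OpenAsync();
            return await useConnection(con);
        }
        finally
        {
            con.Close();
        }
    }
}
```

The lower-level Query class now fully owns connection lifetime, so no silently-caught exception in a caller can ever leak an open connection again.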
See the section below for a gist containing one way to implement these methods.
Hopefully, this real-world crisis will encourage you to think about how your SqlConnection objects are being managed. Especially in older “legacy” applications (ya, we all have to deal with them), you might see these kinds of attempts at optimizing database connections.
Just remember:
I want to bring you through some steps in typical pattern matching usage, and we’ll ask some fun questions and test this feature to see how far we can bring it!
The common usage is issuing a type check within an if statement.
```csharp
object value = SomeFactory();

if (value is long asLong)
{
    // asLong is type <long>
}
else if (value is int asInt)
{
    // asInt is type <int>
}
else if (value is string str)
{
    // str is type <string>
}
```
Fair enough. Let’s move onto the next common usage of pattern matching.
```csharp
object value = SomeFactory();

switch (value)
{
    case long asLong:
        // asLong is type <long>
        break;
    case int asInt:
        // asInt is type <int>
        break;
    case string str:
        // str is type <string>
        break;
}
```
Looks good. Very convenient and useful.
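Worth noting before we go further: switch pattern matching also supports when guards, where a case matches only if both the type test and its condition succeed. A small sketch (the Describe method and its labels are made up for illustration):

```csharp
public static class Matcher
{
    // A 'when' guard makes a case match only if both the type test and the
    // guard condition succeed; cases are tested top to bottom.
    public static string Describe(object value)
    {
        switch (value)
        {
            case int n when n < 0:
                return "negative int";
            case int n:
                return "non-negative int";
            case string s when s.Length == 0:
                return "empty string";
            case string s:
                return "string of length " + s.Length;
            default:
                return "something else";
        }
    }
}
```

Note that the unguarded case int n must come after the guarded one - the compiler rejects the reverse order because the guarded case could never be reached.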
But, this is where my brain has some questions. The syntax [var] is [type] [newVar] is interesting. It is not really a statement. It’s an expression. But… it’s also a statement. Why?
It’s an expression because it evaluates to a boolean. But, it also makes an assignment to a new variable…
```csharp
object value = SomeFactory();

bool isLong = value is long asLong;
bool isInt = value is int asInt;
bool isString = value is string str;
// ... etc.
```
This works. Each of these boolean values is set properly. In the official docs, the is operation is called the is expression. Fair enough.
This means we might be able to do this?
```csharp
object value = SomeFactory();

bool isLong = value is long asLong;
bool isInt = value is int asInt;
bool isString = value is string str;

// Doesn't compile. "asLong" etc. are all in scope, but are "unassigned" (so Visual Studio tells me)
return isLong ? asLong : isInt ? asInt : isString ? str : new object();
```
Nope.
So the assignment to asLong
, asInt
and str
seem to be scoped to the outer level - but the variables are just never set. That’s what Visual Studio says. But, the official C# docs say:
If exp is true and is is used with an if statement, varname is assigned and has local scope within the if statement only.
Alright. What if we did this?
object value = SomeFactory();

return value is long asLong ? asLong
    : value is int asInt ? asInt
    : value is string str ? str
    : new object();
That works and is a succinct way of expressing what we wanted to do.
Did you notice, though, that what Visual Studio says and what the docs say are actually not perfectly in harmony? Visual Studio says that the variable (for example) asLong is in scope - it just hasn't been assigned. The docs say that when doing pattern matching in an if statement, the variable is in scope within the if statement only.
Let’s play around with this.
if (value is long asLong)
    value = asLong;
else if (value is int asInt)
    value = asInt;
else if (value is string str)
    value = str;

asLong = 5; // This compiles! asLong is in scope!
asInt = 5;  // Fails! asInt is not in scope at all!
str = "";   // Fails! str is not in scope at all!
Weird. asLong is actually in the function / outer scope. But not the other variables. Must be the way the compiler chooses to modify the code.
Ok… let's get super weird. Remember how in one of the code examples above we talked about the fact that the is operation is an expression?
bool isLong = value is long asLong;
bool isInt = value is int asInt;
bool isString = value is string str;

asLong = 5;
asInt = 5;
str = "";
Well, what do ya know! That works. Weird. Why?
Apparently, using the is expression in a pure if statement will declare the variable above the if statement. So what really ends up in your code after compiling is something like this:
if (value is long asLong) { }
else if (value is int asInt) { }

// Really becomes something like this by the compiler.
long asLong;
if (value is long)
{
    asLong = (long)value;
}
else if (value is int)
{
    int asInt = (int)value;
}

// NOT this (like the docs say).
if (value is long)
{
    long asLong = (long)value;
}
else if (value is int)
{
    int asInt = (int)value;
}
Neat.
In one of the code samples above - where we are assigning the result of the is expression to a boolean - the compiler must be treating each expression the same way it treats a pure if statement. Each variable is pushed up. Like this:
bool isLong = value is long asLong;
bool isInt = value is int asInt;
bool isString = value is string str;

// Must become...
long asLong;
bool isLong = value is long;
int asInt;
bool isInt = value is int;
string str;
bool isString = value is string;
Again, the official C# docs say (with bold added by me):
If exp is true and is is used with an if statement, varname is assigned and has local scope within the if statement only.
If we removed the last word of that statement, then it would be true. The variable doesn't have local scope only within the if statement. But as it is, it's technically false.
So - by being very technical and nit-picky - the docs are not totally 100% bang-on.
Does it matter? Not really. But, it’s fun to play around with this stuff.
Using the ternary operator in conjunction with is pattern matching is handy and very compact. But in C# version 8 we are expecting something that takes this to a whole new level!
Switch expressions are the “next level” and will look something like this:
object value = SomeFactory();

value switch
{
    long asLong => /* Do something with it! */,
    int asInt => /* Do something else! */,
    string str => /* Do another thing! */
};
Since this new usage of switch is an expression, you can return the result of that block of code. This should be great for things like the strategy pattern, factory patterns, etc.
In trying to push this feature as far as I could, I wrote some really weird code to figure out how the compiled code really works.
object value = 5;

if (value is int asInt) { }
else if (value is long asLong)
{
    asInt = 101;
}
else
{
    asInt = 102;
}

Console.Write(asInt);
Yes, that compiles. The Console writes 5!
Changing the first line to object value = 5L; outputs 101.
Changing it to object value = 'g'; outputs 102.
This confirms our conclusions from the article above.
But, by using IL Spy, we can see the real code. This is what is really outputted:
object value = 5;
int asInt;
object obj;

if ((obj = value) is int)
{
    asInt = (int)obj;
}
else if ((obj = value) is long)
{
    long num = (long)obj;
    asInt = 101;
}
else
{
    asInt = 102;
}

Console.Write((object)asInt);
Nice. If you follow closely (ya, it’s hard to follow - I know…), this is exactly what my entire article was concluding.
What about this?
object value = 5;

bool isLong = value is long asLong;
bool isInt = value is int asInt;

Console.WriteLine(isLong == isInt);
It becomes:
object obj = 5;
int num;
object obj2;

if ((obj2 = obj) is long)
{
    long num3 = (long)obj2;
    num = 1;
}
else
{
    num = 0;
}
bool isLong = (byte)num != 0;

obj2 = obj;
int num2;
if (obj is int)
{
    int num4 = (int)obj2;
    num2 = 1;
}
else
{
    num2 = 0;
}
bool isInt = (byte)num2 != 0;

Console.WriteLine(isLong == isInt);
What? That's a mouthful. But, we can see that our conclusions about what the compiler seemed to be doing are true.
Fun!
Hopefully, you learned something new! Let me know what you think!
Don’t forget to connect with me on twitter or LinkedIn!
Continuing my “Refactoring Legacy Monoliths” series - I want to go over a few tools that I’ve found super helpful and worth investing in.
To make this blog post more useful than a list of products, I’ll go through some high-level steps that represent a way to tackle a refactoring project.
Getting an overview of where pain-points are in your code-base has to be done. Why? Well, how do you know for sure that some code ought to be refactored? Because you feel like it? Because of ugly code?
Unless you measure your code-base in the same way that you would measure - let’s say - the performance of a real application, you don’t really know where issues really are. You might have educated guesses about what needs to change. But not objectively quantifiable conclusions.
The best tool I’ve found to get this kind of objective view of your code-base is NDepend.
NDepend integrates right into Visual Studio and basically just adds its own menu, etc. It can also be run via an external .exe if you want to.
NDepend has so many features it’s almost overwhelming (which is a good thing). I’ll mention a couple features that I’ve found the most helpful in getting a high-level overview of our code-base.
This feature is called a Treemap diagram but reminds me of a Heat Map for your code. It colours specific areas of the diagram (modules in your code) according to their code quality (red is bad…). It reminds me of hard disk usage diagrams that show you what applications in your system are using lots of space, etc.
This is a super quick and reliable way to see where your investigation might want to start. Large red areas are immediate candidates for further investigation. And, most likely, represent some critical part of your app. Again, I said most likely.
In my professional experience, the large red areas almost always correspond to where most “bugs” in the software are eventually found.
If you are a consultant who deals with code-bases in any form, but more specifically in offering code-refactoring or code-analysis services, then I would say this feature is maybe worth the entire cost of the tool. It’s just so easy to get started - especially with huge code-bases.
Just like any great piece of software, NDepend has an awesome dashboard.
From here you can drill into “issues” which gives you a list, ordered by severity, of specific issues in your code-base. Taking note of the most common wide-spread issues gives you an idea of the overall technical concerns you might have about the code-base.
Right away, in your face, is a “grade” of how much technical debt the overall code-base has accumulated. These metrics are customizable too, in the event you have certain measurements you feel are not needed to analyze.
This is a time-saver and gives you super-powers for gaining insight into how a code-base was built and what specific issues need to be dealt with. If you are new to a project, you can - within 15 minutes - have the ability to say to those experienced in that project:
“Hey [team-mate], I’ve had a quick look at the code to get a high-level overview. Let me take a guess and say that [module name] has probably been causing issues for you? Lots of bugs? Most of the team doesn’t like to work in that area?”
This can be a way of having others trust you quicker than they would have otherwise.
Roslyn analyzers are native C# code analyzers that you can use in Visual Studio. Visual Studio 2017 has the ability to run a code analysis on your entire solution and also has a feature that will calculate code metrics - similar to NDepend.
This can be an easy way to analyze your code-base if you are already using Visual Studio. The cool thing is that you can install different Roslyn analyzers into Visual Studio to enhance your intellisense, real-time suggestions, build time error checking, etc.
Running these metrics/analyzers will show up in your code (as a squiggly) and in the error list in Visual Studio. You can export the code metrics to an excel file, which is nice.
These features aren’t comparable to the power of NDepend, but I think these represent an area that Visual Studio has been lacking in. I’m excited to see if Microsoft will add some advanced reporting on the analyzers like charts, dashboards, etc.
Once you know where the problem areas are, I would suggest tackling the smaller red areas (from the Treemap). These represent areas that don’t have an overwhelming amount of code, yet do need to be refactored. These areas are probably known as “small” bug producers.
Think of the system you are working with as a “bug producing onion”.
In the center is the largest, most horrible, horrendous piece of the system known to frighten even the mightiest at heart. As we move closer to the outside of the onion we encounter error-prone but less intimidating and less crucial pieces of the system.
Let’s start by unravelling the onion from the outside.
This approach is less risky and allows you to build up confidence and domain knowledge (if you're working in a system you're not so familiar with).
But how do you begin tackling these areas?
Using NDepend’s dashboard, you can see a list of specific code quality issues. Drilling into each issue will give you tips on how to fix those specific issues. I encourage you to explore this feature yourself :)
Moving onto another tool though, JetBrains' ReSharper is fantastic. Just like NDepend, ReSharper has tons of features. It integrates into Visual Studio seamlessly. It is known to be resource intensive on larger projects/solutions, but for the sake of refactoring code, it's invaluable.
This was an issue I had specifically on a project I was refactoring. There was one file which had hundreds of different types (classes, interfaces, etc.) that were in one file named “WIP.cs”. That’s short for “Work In Progress”. And yes, this was in production code. Some types were actively used. Some weren’t. And, some had version numbers appended to them - like “Class2” and “Class3”. Fun.
So, instead of going through each of the types one-by-one and extracting them into their own files, ReSharper has a command that you can run on this file that will just magically do that for you. Just that saved me days of work - not to mention my sanity.
I now had a few hundred extra files to deal with - all in the same folder. Next, I needed to structure the files as best as I could with a proper file/folder structure. This was manual work - but once all the files are in their proper place we still need to adjust all the namespaces for each type!
ReSharper has that covered. Just run the “Adjust Namespaces” command on a specific project, file or the entire solution and the folder structure for each file will be automatically applied to the namespace of the file. Again, that saved me at least a few days of work.
This feature shines when doing high-level re-organization refactoring (i.e. file structure and folder structure changes).
From this real-life example, I was able to perform what otherwise would have taken weeks of work and get it done in a few days. Win!
After you refactor your code and have some unit tests in place (yes, you really should…), I always like to profile the run-time performance of the code that was changed. This next tool is one I always have running. As a web developer (primarily), this tool is non-negotiable.
Prefix is a free real-time web profiler. Basically, you install it on your machine and then hit a local URL that Prefix is assigned to. Voila!
It will automatically show you every HTTP request that your machine is handling with tons of details about each request. It’s not a generic performance profiler - Prefix will show you specific details about each request. Features I’ve come to love are:
It will show you hidden exceptions you never knew were happening (and are slowing your app down)
All database calls are displayed with the actual code that was executed, how long the query took and even how long just downloading the data from the SQL server took (helpful with large data sets!)
A code “hot-path” shows you which methods in your code are taking up the most time to execute (with the actual metrics displayed)
To recap, NDepend, ReSharper, and Stackify’s Prefix are all fantastic tools that boost your refactoring capabilities and ability to comprehend code from a high-level. They also give us tooling that will assist in the nitty-gritty details of improving our code.
Visual Studio's code analysis tools offer great potential. Roslyn analyzers, in particular, are an area where future integration with charts, dashboards, etc. within Visual Studio would be a useful addition.
Before doing any actual changes or refactoring to your product, planning a refactor is your next step. In other words, you need a game plan. I’ll also discuss some refactoring tips for you to get started!
P.S. This is part 3 of my “Refactoring Legacy Monoliths” series:
Refactoring Legacy Monoliths - Part 1: First Steps
Refactoring Legacy Monoliths - Part 2: Cost-Benefit Analysis Of Refactoring
One comment I’ve seen come up on Reddit about this series (quite a bit…) is the accusation that I’m suggesting you ought to rewrite your entire codebase. I’ve never said that. Nor have I ever implied that you should. Really, you probably shouldn’t.
As a refresher, in his book Working Effectively With Legacy Code, Michael Feathers states:
To me, legacy code is simply code without tests.
Why does Michael care so much about testing your code from the inside? (i.e. not by having people test your website over and over - which is really expensive btw). There’s a simple question that can answer this:
If you were to change a core feature of your product in a non-trivial way, would you feel confident about making that change? Would you trust the system to still act exactly as it did?
If you don’t have any testing, then how can you be confident? How can you trust your system?
Having tests in place is like doing acrobatics in the air with a safety net vs. not having a safety net. Ouch!
Your first goal should be to start implementing unit tests on your code. This is foundational work. You need to be able to change your code and have confidence that it still works.
Again:
Your first goal should be to implement code-based testing. You cannot refactor your system with confidence unless you have a safety net.
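To make "code-based testing" concrete, here's a minimal sketch of a safety-net test. PriceCalculator is a made-up class for illustration; in a real project you'd use a framework like xUnit, NUnit or MSTest rather than hand-rolled assertions, but the idea is the same: if behavior changes during a refactor, a test fails before the change ever ships.

```csharp
using System;

// Hypothetical class under test.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate) => price * (1 - rate);
}

public static class PriceCalculatorTests
{
    // A bare-bones "safety net": if an assertion fails, we know a
    // refactoring changed behavior before it reaches production.
    public static void Run()
    {
        var calculator = new PriceCalculator();

        if (calculator.ApplyDiscount(100m, 0.10m) != 90m)
            throw new Exception("10% discount should yield 90.");

        if (calculator.ApplyDiscount(100m, 0m) != 100m)
            throw new Exception("0% discount should leave the price unchanged.");

        Console.WriteLine("All tests passed.");
    }
}
```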
After this, your goals may vary. If you have completed Step 1 and Step 2 then you should have a solid list of what needs to change.
What I would suggest at this point is having a formal discussion (with a formal outcome/document) that answers the following question:
What do we want our system to look like in 1 year? In 2 years?
Maybe we should be using a new technology stack like ASP.NET Core? Maybe our current code architecture does not allow us to re-use our business rules on other platforms (web vs. mobile API vs. desktop app)? This would imply that we need to consolidate our business logic and rules. (P.S. None of these cases require a re-write.)
The number one obstacle that (most likely) prevents you from creating isolated unit tests and isolating your business rules and entities is dependencies.
Once you start, you find that you start telling yourself:
Well, in order to test [thing 1] I now need to have an instance of [thing 2]. But, [thing 2] needs an instance of [thing 3].
“Thing 1” might be an entity you want to test - let’s say, a Report entity (which models some tabular data).
Now, imagine that “Thing 2” is another class - LinkGenerator (which generates links for the report).
LinkGenerator needs access to “Thing 3”, which is the HttpSession.
If you want to unit test the Report entity, you need:
a LinkGenerator, which needs…
an HttpSession
Uh Oh. How can you unit test when you need an HttpSession? Unit tests don’t run off a web server! (Well, they shouldn’t…)
Sorry to say (you already know…), it’s going to take some work. You need to break the chain of dependencies.
Fortunately for us, that’s one of the primary goals of refactoring. Others have already done the hard lifting for us.
Let’s look at a couple of dependency-breaking refactoring tips.
The title says it all. Sticking with our little example, imagine the LinkGenerator has the following method (pseudo-ish code).
public string GenerateLink()
{
    // ... some kind of processing
    var someValue = HttpSession["SomeKey"];
    // ... more processing
    var someOtherValue = HttpSession["SomeOtherKey"];
    return link;
}
We can’t test this method because it references the HttpSession object that only exists in a web application. We don’t want our models or business entities to know about the web (this is in line with our goal of isolating business entities from the presentation of our data).
By injecting an interface instead, we can remove the dependency on the actual HttpSession.
public string GenerateLink(IHttpSessionAccessor session)
{
    // ... some kind of processing
    var someValue = session.GetValue("SomeKey");
    // ... more processing
    var someOtherValue = session.GetValue("SomeOtherKey");
    return link;
}
I’m sure you can imagine what the interface definition would look like. The concrete class might look something like this:
public class HttpSessionAccessor : IHttpSessionAccessor
{
    private readonly HttpSession _session;

    public HttpSessionAccessor(HttpSession session)
    {
        this._session = session;
    }

    // You could be fancy and use generics?
    public object GetValue(string key)
    {
        return this._session[key];
    }
}
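The interface definition left to the imagination could be as minimal as the following. I've also sketched a dictionary-backed fake (a hypothetical helper name) that can serve as a "dummy implementation" in unit tests, with no web server required:

```csharp
using System.Collections.Generic;

public interface IHttpSessionAccessor
{
    object GetValue(string key);
}

// A dictionary-backed fake: stores values in memory, so tests
// never need a running web application.
public class FakeSessionAccessor : IHttpSessionAccessor
{
    private readonly Dictionary<string, object> _values = new Dictionary<string, object>();

    public void SetValue(string key, object value) => _values[key] = value;

    public object GetValue(string key) =>
        _values.TryGetValue(key, out var value) ? value : null;
}
```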
Now, we can do something like this in our testing code:
// Using a mocking library like Moq:
var sessionMock = new Mock<IHttpSessionAccessor>();
// Implement/set up the mock...
// Or just assign a dummy implementation of IHttpSessionAccessor instead.

var generator = new LinkGenerator();
string link = generator.GenerateLink(sessionMock.Object);

// Assert ...
Now we can build tests around the LinkGenerator and have confidence that:
Imagine our code above was originally this:
public string GenerateLink(HttpSession session)
{
    // ... some kind of processing
    var someValue = session["SomeKey"];
    // ... more processing
    var someOtherValue = session["SomeOtherKey"];
    return link;
}
What’s wrong? We have the same issue as above. We still need an instance of HttpSession. Which means… we need a web server to be running. Bad.
To solve this, just do the same thing as #1. Turn the parameter into an interface and access the interface instead of the actual implementation (HttpSession).
You are probably familiar with this technique. If you have a section of code that is doing multiple chunks of work or has multiple responsibilities, then you need to break them up. Take a chunk of code that does one thing, and create a new method out of it. Avoid references to global state (like HttpSession) so that you can unit test your new methods.
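As a sketch of the technique (the invoice example and all names here are made up), a method doing three jobs becomes a coordinator of three small, individually testable methods:

```csharp
using System;
using System.Globalization;
using System.Linq;

public class InvoiceFormatter
{
    // After extraction: the public method just coordinates
    // three single-purpose methods.
    public string Summarize(decimal[] lineAmounts)
    {
        Validate(lineAmounts);
        return Format(ComputeTotal(lineAmounts));
    }

    // Each extracted method does one thing and avoids global state,
    // so each can be unit tested in isolation.
    private static void Validate(decimal[] lineAmounts)
    {
        if (lineAmounts == null || lineAmounts.Length == 0)
            throw new ArgumentException("No line amounts.");
    }

    private static decimal ComputeTotal(decimal[] lineAmounts) => lineAmounts.Sum();

    private static string Format(decimal total) =>
        "Total: " + total.ToString("0.00", CultureInfo.InvariantCulture);
}
```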
Good indicators of where to break up your code are:
The primary areas you need to focus on are:
Dependencies will need to be broken. But this ultimately leads you to a place where:
Thanks for reading :) Let me know what you think in the comments. Have you ever had to go through this process personally? Let me know!