
.NET Azure Functions – Isolated Process Update

Isolation – good for Azure Functions, bad for people

A short while ago I posted a summary of the current state of play of Azure Functions and .NET 5. In short, to run your function in .NET 5 you need to use the new Isolated Process. It’s so new that it’s missing a lot of the Azure Functions features, e.g. several bindings and Durable Functions. So Durable Functions users are stuck on .NET Core 3.1 until .NET 6 is supported in the In-process version.

Whilst all that is still true, there is now an update from the team on where they’re intending to go in future. The In-process version will end with the .NET 6 release and development will concentrate on bringing the Isolated Process up to feature parity in time for .NET 7. Read their post here. After that they are promising to support .NET versions as and when they are released.

This is best illustrated by reposting their roadmap from that link:

Azure Functions Roadmap

The Durable Functions support in the Isolated Process is said to arrive in “2022 or possibly earlier”. I look forward to it.


Azure Functions & .NET 5 – State of Play

.NET 5 was released on November 10th 2020 and contains features (specifically C# 9 support and thus record types) that we were keen to use in our products at Matchnet.

As we’re heavily based on Azure Functions, I was happy to see an announcement of a preview of Azure Functions on .NET 5. We went ahead and evaluated it, and here’s what we found:

  • No support for rich function types such as Durable Functions.
  • Visual Studio support is not there yet, but is promised soon.
  • The HTTP trigger interface uses a simplistic HTTP message abstraction (HttpMessageData instead of HttpRequest).

Durable Functions

This was the most disappointing issue, as we’re using Durable Functions in a few places. They’re a useful implementation of a saga/orchestration.
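For flavour, here’s a minimal in-process orchestration sketch (the activity names are hypothetical, and this uses the Durable Functions 2.x API rather than anything from the .NET 5 preview):

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class OrderSaga
{
    // Each activity is one step in the saga; the framework checkpoints
    // progress between the awaits and replays the orchestration as needed.
    [FunctionName("OrderSaga")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var orderId = context.GetInput<string>();

        await context.CallActivityAsync("ReserveStock", orderId);
        await context.CallActivityAsync("TakePayment", orderId);
        await context.CallActivityAsync("DispatchOrder", orderId);
    }
}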

The implementation of the .NET 5 Azure Functions is based around “an out-of-process model where a .NET worker process runs alongside the runtime”. On the face of it this sounds good, as it decouples us from the runtime’s version support. However, it means we also don’t have access to the extension code running in the host, which is what Durable Functions relies on. See this GitHub issue comment.

The team will be updating the host to the .NET 6 LTS, which will mean .NET based functions being deployed in the conventional way, directly to the host and not out of process, so Durable Functions should work fine. Roll on November 2021.

Visual Studio

As of Visual Studio 16.8.4 there isn’t a project template for the out-of-process .NET 5 functions. Instead you create an ASP.NET Core project and build around that. The readme on the GitHub preview site has the details.

We had trouble getting local debugging running. The only way that seemed successful was to start the program then attach to the “dotnet.exe” process, if you can find the right one. It’s a bit of a hassle but I expect that will get sorted out with official Visual Studio support.

HTTP Trigger’s HttpMessageData

This is the class used to convey the payload for an HTTP trigger. It’s a wrapper on top of an RpcHttp class, part of a .NET implementation of gRPC. As such it has a very simple interface and doesn’t provide the full suite of standard HTTP capabilities, including cookies and file attachments, both of which are available in the standard Azure Functions HTTP trigger via the HttpRequest class.
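For contrast, here’s a sketch of what the in-process model offers through HttpRequest; cookies and uploaded files are right there on the request (the isolated preview’s HttpMessageData has no equivalent):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class InProcessHttpExample
{
    [FunctionName("InProcessHttpExample")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        // The full ASP.NET Core request: cookies and uploaded files are available.
        string sessionCookie = req.Cookies["session"];
        var upload = req.HasFormContentType && req.Form.Files.Count > 0
            ? req.Form.Files[0]
            : null;

        return new OkObjectResult(new { sessionCookie, fileName = upload?.FileName });
    }
}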

Conclusion

We’re going to skip it for now, though we’ll keep an eye on where it goes. It doesn’t sound to me like the out-of-process model is where the function team’s heart is at right now for .NET functions, so I hold out more hope for the upgrade of the host to .NET 6.


Referencing a .NET assembly in a compile time safe manner

If you need to provide a System.Reflection.Assembly instance to an API [1], there are several mechanisms for doing so. They roughly split into two camps:

  • Run-time assembly loading
  • Assemblies known at compile time

The run-time assembly loading includes scenarios such as having a plug-in architecture where the code being referenced cannot be known at the time of compilation.

For the other camp, if we know exactly which assembly we need to reference at compile time we have a couple of options. We can use the name of the assembly as a string like so:

Assembly.Load("MyCompany.Util");

(Note that if the assembly is already loaded the runtime will just return the loaded instance of that assembly and won’t attempt to load it again.)

Alternatively we can use a type from that assembly like so:

Assembly.GetAssembly(typeof(MyCompany.Util.AnyOldClass));

The problem with the assembly name string approach is that there is no compile time checking. The typeof approach allows for compile time checking but introduces an artificial dependency in the calling code on a class that it only needs for the purposes of getting the assembly. This calling code is then subject to any renaming or removal of that class when in reality it cares only about the assembly and not the type.

The solution I’ve gone for is to create a static, empty class with a similar name to the assembly in the root of the default namespace of the assembly I wish to reference and use this in the typeof:

using MyCompany.Util;
/* … */
Assembly.GetAssembly(typeof(MyCompanyUtil));

This provides us with a compiler error if the assembly reference is dropped or the assembly is renamed. It will take part in any necessary refactoring operations and is not dependent on irrelevant types.
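For completeness, the marker class itself is about as small as a class can get (the names here mirror the hypothetical MyCompany.Util assembly from the snippets above):

namespace MyCompany.Util
{
    // Empty marker class whose only purpose is to identify this assembly
    // for compile-time-safe Assembly.GetAssembly calls.
    public static class MyCompanyUtil
    {
    }
}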

 

[1] Examples include Autofac’s MVC and WebApi integration: ContainerBuilder.RegisterControllers & ContainerBuilder.RegisterApiControllers


Entity Framework Performance Tip for Creating Entities

This tip is applicable if you’re using Entity Framework Code First with dynamic proxies and you have a lot of objects attached to your context, for whatever reason (e.g. within a batch job).

The first thing to note is that if you have a lot of objects attached to your context, you want to avoid DetectChanges being called on the context unless absolutely necessary. DetectChanges compares the original state of each object to its current state and uses this information for a couple of purposes: marking entities as added/changed/deleted, and fixing up relationships such as bi-directional navigation properties and foreign key columns.

Arthur Vickers has an excellent blog series explaining this all very well: http://blog.oneunicorn.com/2012/03/10/secrets-of-detectchanges-part-1-what-does-detectchanges-do/

DetectChanges is obviously necessary when SaveChanges is called, but it’s also called whenever one of these operations is called:

  • DbSet.Find
  • DbSet.Local
  • DbSet.Remove
  • DbSet.Add
  • DbSet.Attach
  • DbContext.GetValidationErrors
  • DbContext.Entry
  • DbChangeTracker.Entries

DetectChanges calls can be avoided, though, by turning AutoDetectChanges off. Check out this gist:

using System;
using System.Data.Entity;

/// <summary>
/// Temporarily disables AutoDetectChanges on a DbContext and restores the
/// original setting when disposed.
/// </summary>
public sealed class NoChangeTracking : IDisposable
{
    private readonly DbContext _dbContext;
    private readonly bool _initialAutoDetectChangesValue;

    public NoChangeTracking(DbContext dbContext)
    {
        if (dbContext == null) throw new ArgumentNullException("dbContext");

        _dbContext = dbContext;
        _initialAutoDetectChangesValue = dbContext.Configuration.AutoDetectChangesEnabled;
        SetChangeDetection(false);
    }

    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Design", "CA1063:ImplementIDisposableCorrectly")]
    public void Dispose()
    {
        // Restore whatever the caller had configured before we switched it off.
        SetChangeDetection(_initialAutoDetectChangesValue);
    }

    private void SetChangeDetection(bool setting)
    {
        _dbContext.Configuration.AutoDetectChangesEnabled = setting;
    }
}

With this class you can write code such as:


using (new NoChangeTracking(context))
{
    context.MyEntities.Add(new MyEntity());
}

… and DetectChanges will not be called. (You could even turn automatic change detection off globally, but you would then need to remember to call DetectChanges manually before SaveChanges is called.)
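The global variant looks roughly like this:

// Disable automatic change detection for the lifetime of the context.
context.Configuration.AutoDetectChangesEnabled = false;

// ... add or modify many entities without the DetectChanges overhead ...

// Detect changes explicitly once, just before saving.
context.ChangeTracker.DetectChanges();
context.SaveChanges();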

The NoChangeTracking approach works okay, but it can cause problems if you are relying on two-way navigation properties. For example:


var parent = new Parent();
var child = new Child();
parent.Children.Add(child);

using (new NoChangeTracking(context))
{
    context.Parents.Add(parent);
}

Debug.Write(child.Parent.Id); // Null reference exception

The child.Parent navigation property will not have been set as we set AutoDetectChangesEnabled to false before we performed the DbSet.Add. We could choose not to turn it off, but that would lead again to the performance issues. We could also explicitly alter both the parent and child navigation properties each time we change one end, but that’s extra code and it’s easy to forget to do.
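That manual fix-up amounts to something like this, using the Parent/Child types from the example:

// Keep both ends of the relationship in sync by hand (easy to forget):
parent.Children.Add(child);
child.Parent = parent; // set the inverse navigation property explicitly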

With dynamic proxies enabled, there’s an easier way. Instead of creating the entities by using the new operator, you create a dynamic proxy by using the DbSet.Create method. This dynamic proxy contains code to intercept alterations to each navigation property and ensure that any reciprocal navigation property on the target object is updated. E.g. when parent.Children.Add(child) is called, the child.Parent property is automatically populated.

Here’s that code again but with the correct proxy initialization:


var parent = context.Parents.Create();
var child = context.Children.Create();
parent.Children.Add(child);

using (new NoChangeTracking(context))
{
    context.Parents.Add(parent);
}

Debug.Write(child.Parent.Id); // No null reference!

That’s it. There are many other performance considerations, but combining switching off AutoDetectChangedEnabled with properly using dynamic proxies can get us a long way.


Tightening Injected Dependencies on Entity Framework

Dependency injection as a pattern provides a lot of useful nudges to get you to produce easily readable and maintainable code. One way in which it does this is to make dependencies explicit so you can see exactly what services a class requires. When using Entity Framework most people are passing the whole context through as a dependency. This post explores an alternative to this approach that provides more clarity of the client code’s use of the context.

We’ve been coding with Entity Framework here at laZook for a while now, using the code first workflow. We’re using Autofac as our dependency injection framework. We inject dependencies into the constructor so that there is one clear place to view a class’s dependencies.

We used to inject the whole DbContext derived class into each type that needed to do anything with the context, e.g. add entities or save changes. This was fairly easy to do, but led to some confusion. Let’s look at an example program using this technique:

public class Coordinator
{
    private readonly IMyContext _myContext;
    private readonly WidgetGenerator _widgetGenerator;
    private readonly WotsitGenerator _wotsitGenerator;

    public Coordinator(
        IMyContext myContext,
        WidgetGenerator widgetGenerator,
        WotsitGenerator wotsitGenerator)
    {
        _myContext = myContext;
        _widgetGenerator = widgetGenerator;
        _wotsitGenerator = wotsitGenerator;
    }

    public void DoStuff()
    {
        _widgetGenerator.GenerateWidgets();
        _wotsitGenerator.GenerateWotsits();
        _myContext.SaveChanges();
    }
}

public class WotsitGenerator
{
    private readonly IMyContext _myContext;

    public WotsitGenerator(IMyContext myContext)
    {
        _myContext = myContext;
    }

    public void GenerateWotsits()
    {
        if (DateTime.Now.DayOfWeek == DayOfWeek.Friday)
        {
            throw new Exception("No wotsit generation on Fridays!");
        }

        _myContext.Wotsits.Add(new Wotsit());
    }
}

public class WidgetGenerator
{
    private readonly IMyContext _myContext;

    public WidgetGenerator(IMyContext myContext)
    {
        _myContext = myContext;
    }

    public void GenerateWidgets()
    {
        _myContext.Widgets.Add(new Widget());
        _myContext.SaveChanges();
    }
}

public class MyContext : DbContext, IMyContext
{
    public IDbSet<Widget> Widgets { get; set; }
    public IDbSet<Wotsit> Wotsits { get; set; }
}

public interface IMyContext
{
    IDbSet<Widget> Widgets { get; set; }
    IDbSet<Wotsit> Wotsits { get; set; }
    int SaveChanges();
}
class Program
{
    static void Main(string[] args)
    {
        using (var container = CreateContainer())
        {
            var coordinator = container.Resolve<Coordinator>();
            coordinator.DoStuff();
        }
    }

    private static IContainer CreateContainer()
    {
        var containerBuilder = new ContainerBuilder();
        containerBuilder.RegisterType<MyContext>().AsImplementedInterfaces().InstancePerLifetimeScope();
        containerBuilder.RegisterType<Coordinator>();
        containerBuilder.RegisterType<WotsitGenerator>();
        containerBuilder.RegisterType<WidgetGenerator>();
        return containerBuilder.Build();
    }
}

In this simple example the Coordinator class calls upon a couple of worker classes and then persists any changes. The worker classes add the entities to their respective DbSets.

There is a problem with the code, though. If it’s a Friday, no Wotsits will be made. The code will exit due to the exception. If you were looking only at the Coordinator and WotsitGenerator code, you’d be forgiven for thinking that there was a single unit of work and it would not be committed. It looks like the Coordinator is responsible for the SaveChanges call. However, a closer look at the WidgetGenerator reveals a call to SaveChanges after it has created a widget.

It’s a simple example, but where SaveChanges is buried in larger code it can be difficult to work out what is being committed and what isn’t.

What to do about this? One answer is to ensure that SaveChanges is only ever called at the very top level as the last action before the end of the program (in this example) or page request / job execution / button click handler. This works, but is somewhat limiting. What if you want to perform multiple SaveChanges to checkpoint during a long running operation? What if the success or failure of one SaveChanges determines whether or not another unit of work is embarked upon?

We need to make it clear who owns the responsibility for initiating completion of the unit of work.

The solution we’ve come up with is to create an ICompleteUnitOfWork interface that contains the SaveChanges method and have the context implement this interface. This interface is then declared as a dependency for the class that has the responsibility of calling SaveChanges. This allows us to glance at a class constructor and see whether that class owns the responsibility for completing the unit of work. Elsewhere we inject IDbSet<TEntity> instances. This helps us see which entities (or at least which aggregate roots) a class is involved in reading or editing.

Here’s the same code with the new dependencies and the errant SaveChanges in WidgetGenerator removed. We can clearly tell that WidgetGenerator does not call SaveChanges by seeing that it only takes a dependency on IDbSet<Widget>.

public class Coordinator
{
    private readonly ICompleteUnitOfWork _unitOfWorkCompleter;
    private readonly WidgetGenerator _widgetGenerator;
    private readonly WotsitGenerator _wotsitGenerator;

    public Coordinator(
        ICompleteUnitOfWork unitOfWorkCompleter,
        WidgetGenerator widgetGenerator,
        WotsitGenerator wotsitGenerator)
    {
        _unitOfWorkCompleter = unitOfWorkCompleter;
        _widgetGenerator = widgetGenerator;
        _wotsitGenerator = wotsitGenerator;
    }

    public void DoStuff()
    {
        _widgetGenerator.GenerateWidgets();
        _wotsitGenerator.GenerateWotsits();
        _unitOfWorkCompleter.SaveChanges();
    }
}

public class WotsitGenerator
{
    private readonly IDbSet<Wotsit> _wotsitDbSet;

    public WotsitGenerator(IDbSet<Wotsit> wotsitDbSet)
    {
        _wotsitDbSet = wotsitDbSet;
    }

    public void GenerateWotsits()
    {
        if (DateTime.Now.DayOfWeek == DayOfWeek.Friday)
        {
            throw new Exception("No wotsit generation on Fridays!");
        }

        _wotsitDbSet.Add(new Wotsit());
    }
}

public class WidgetGenerator
{
    private readonly IDbSet<Widget> _widgetDbSet;

    public WidgetGenerator(IDbSet<Widget> widgetDbSet)
    {
        _widgetDbSet = widgetDbSet;
    }

    public void GenerateWidgets()
    {
        _widgetDbSet.Add(new Widget());
    }
}

public class MyContext : DbContext, ICompleteUnitOfWork
{
    public IDbSet<Widget> Widgets { get; set; }
    public IDbSet<Wotsit> Wotsits { get; set; }
}

public interface ICompleteUnitOfWork
{
    int SaveChanges();
}
class Program
{
    static void Main(string[] args)
    {
        using (var container = CreateContainer())
        {
            var coordinator = container.Resolve<Coordinator>();
            coordinator.DoStuff();
        }
    }

    private static IContainer CreateContainer()
    {
        var containerBuilder = new ContainerBuilder();
        containerBuilder.RegisterType<MyContext>().AsSelf().AsImplementedInterfaces().InstancePerLifetimeScope();
        containerBuilder.Register(c => c.Resolve<MyContext>().Widgets);
        containerBuilder.Register(c => c.Resolve<MyContext>().Wotsits);
        containerBuilder.RegisterType<Coordinator>();
        containerBuilder.RegisterType<WotsitGenerator>();
        containerBuilder.RegisterType<WidgetGenerator>();
        return containerBuilder.Build();
    }
}

What are the problems with this approach?

There are some usage patterns of Entity Framework that it doesn’t support too well, but it can be extended to do so. For example, there is no way to get at the DbContext.Entry method for attaching objects and setting their state. You could introduce another interface for this, IManageUnitOfWorkObjectState, but it feels clunky.
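A possible shape for that interface, mirroring the corresponding DbContext members (the signatures here are my sketch, not settled code):

using System.Data.Entity.Infrastructure;

// Hypothetical companion interface for the rarer code paths that need
// DbContext.Entry; DbContext already implements these members, so MyContext
// only has to declare the interface.
public interface IManageUnitOfWorkObjectState
{
    DbEntityEntry Entry(object entity);
    DbEntityEntry<TEntity> Entry<TEntity>(TEntity entity) where TEntity : class;
}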

Also, injecting the IDbSets is a good first step, but I actually prefer creating some repositories on top of the IDbSets as it better allows for caching and encapsulation of common queries.
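As a rough illustration, a thin repository over one of the sets might look like this (CreatedOn is a hypothetical property, used only to show a common query being encapsulated):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Thin repository over IDbSet<Widget>: a natural seam for caching and for
// encapsulating queries that would otherwise be repeated across callers.
public class WidgetRepository
{
    private readonly IDbSet<Widget> _widgets;

    public WidgetRepository(IDbSet<Widget> widgets)
    {
        _widgets = widgets;
    }

    public Widget Add(Widget widget)
    {
        return _widgets.Add(widget);
    }

    public IList<Widget> GetMostRecent(int count)
    {
        return _widgets.OrderByDescending(w => w.CreatedOn)
                       .Take(count)
                       .ToList();
    }
}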

I’m interested in any development suggestions or criticisms of the ideas. Let me know here or on Twitter.


laZook Microstore

Six months ago I joined a team working on a new start-up called laZook. It’s an online distributor, providing brands with a single route to many existing eCommerce channels and some that we’re helping to develop from scratch. We handle the listing, fulfilment and payment for these brands’ products.

We have various features at varying levels of maturity. One of those is a “microstore” that provides a drop down checkout on a website. This is in use already on several blogs and magazine sites. It needs more work but, as it stands, it provides a publisher with the choice of thousands of products to sell on their site. They insert a javascript snippet on their page and their site is now eCommerce enabled. For each purchase they receive commission, much like in a traditional affiliate scheme. One advantage here is that the purchaser is not taken off site for the checkout flow.

We’re currently using PayPal for the payment processing, but they don’t seem to have moved with the times very much and do force us into some poor user experience during checkout. As a result we’ll likely be moving to Stripe at some point soon. They have some modern APIs and can offer a lot more control over the experience.

If you want to see the Microstore in action (even if it is a little rough around the edges), check out the Ex Cellar Wine Club blog. Please do give us some feedback on what you think!


Quartz.net persistent job store LAST_MODIFIED_TIME issue

I’m playing around with Quartz.net and adding support for a persistent job store via the ADO.NET Job Store. As per the recommendation, I’m instructing the job store to persist job parameters in plain text rather than BLOBs, using the configuration:

<add key="quartz.jobStore.useProperties" value="true"/>

Unfortunately in triggering a simple job, which has no explicit job data map, I receive this error:

JobDataMap values must be Strings when the 'useProperties' property is set. Key of offending value: LAST_MODIFIED_TIME

When looking in the debugger at the JobDataMap object provided to the job I scheduled, there is no LAST_MODIFIED_TIME present. Digging a bit deeper, it seems that there is another job running called FileScanJob, scheduled by the XMLSchedulingDataProcessorPlugin (used to read the job and trigger configuration from an XML file). This job adds the LAST_MODIFIED_TIME entry to its JobDataMap during job execution, which is of type DateTime rather than string.

Why is this raising an exception? It comes down to the implementation of the StdAdoDelegate class. When the quartz.jobStore.useProperties configuration value is set to true, it deliberately refuses to write to the job store database any job data whose keys and values are not both strings. Despite this restriction, it still stores the data using binary serialization after this check (in the form of a NameValueCollection).
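For jobs you control, the practical consequence is to keep every JobDataMap value a string; a minimal sketch (MyJob is a hypothetical IJob implementation):

using System;
using Quartz;

// With quartz.jobStore.useProperties=true, store values as strings and
// parse them back inside the job.
IJobDetail job = JobBuilder.Create<MyJob>()
    .WithIdentity("myJob", "myGroup")
    .UsingJobData("lastProcessedUtc", DateTime.UtcNow.ToString("o"))
    .Build();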

To come back to the original reason for setting this property, the tutorial advises using it to avoid serializing complex types and running into versioning issues after type upgrades. I’d contest that this objective could be achieved simply by supporting all the primitive .NET types whose serialization is unlikely to change. The change to StdAdoDelegate would be to validate the type of each name/value pair (ensuring only simple types are used when quartz.jobStore.useProperties is true) and to store the data as a System.Collections.Hashtable, so that future changes to the JobDataMap class in Quartz would not cause serialization issues.

Another solution could be to have an XmlAdoDelegate that used XML serialization instead of binary serialization.

Maybe I am missing some extra design constraint here. I’ve posted the issue to GitHub (here) to see if my thoughts can be easily shot down.


Installing Balanced Data Distributor on SQL Server 2008 R2 SP1

Edit: This post refers to an older version of the BDD installer. As per JasonH’s comment below, Microsoft has released a new installation package which should hopefully fix the installation bug. It can be found here: http://www.microsoft.com/en-us/download/details.aspx?id=4123

Original post:

Microsoft’s Balanced Data Distributor does not install on top of SQL Server 2008 R2 SP1. It installs fine without SP1 but otherwise comes up with the error:

“The installation is not successful. Check the following prerequisites: 1. Either Integration Services or BIDS has to be installed. 2. The version of these components has to be either SQL Server 2008 SP2 (or future SPs) or SQL Server 2008 R2 (or future SPs)”

In my case all the prerequisites were met. As per this thread, I used Process Monitor to examine the registry keys the installer checks for the version numbers. I then modified the keys to pretend I was running SQL Server 2008 R2 RTM, ran the BDD installer again (successfully) and modified the keys back to their original values.

Warning: This is not best practice advice! If you do the same as I did and your production system is rendered unusable, this will be entirely your fault. I did this on a throwaway development environment to save time uninstalling SP1 and reinstalling it.

The keys I altered were all in the following path:

  • HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\100\

The specific keys and the values I set them to were:

  • DTS\Setup\SP = 0
  • DTS\Setup\Version = 10.50.1600.1
  • BIDS\Setup\SP = 0
  • BIDS\Setup\Version = 10.50.1600.1

When setting up a production system, please ensure you apply the BDD installation before SP1. Don’t use this technique, which will probably render your environment unsupportable!


Deploying ASP.NET MVC 3 Razor – Missing System.Web.Helpers (and others)

Last week I was racing against time to complete a QA deployment of an ASP.NET MVC 3 application that uses the Razor view engine.

The deployment was to a clean Windows Server 2008 R2 box. I had already completed the following steps:

  • Used the Web Platform Installer to install “IIS 7 Recommended Configuration”.
  • Altered my TFS 2010 Build Definition and used the WebDeploy tool to create a deployment package and install it on the server.
  • Changed the “copy local” flag to true on the System.Web.Mvc assembly reference in my web project (as suggested in several pre-Razor articles about BIN deployment of ASP.NET MVC sites)
  • Ensured I was using a .NET 4 application pool.

Once all this was done and I browsed to the home page of the application, I started to get a series of errors about missing DLLs, starting with System.Web.Helpers. Additional missing assemblies included:

  • Microsoft.Web.Infrastructure
  • System.Web.Razor
  • System.Web.WebPages.Administration
  • System.Web.WebPages.Deployment
  • System.Web.WebPages.Razor

The solution compiled fine and ran okay on all the development machines. It turned out that these DLLs are needed for Razor based web pages and are a requirement over and above the standard ASP.NET MVC references.

My initial solution was to locate these assemblies and add references to them from the project with “copy local” as “true”. The assemblies are in the following folder on development machines:

C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies

However, as I say, it was a series of errors, as there were plenty of DLLs that needed to be added. On further searching it turned out that Visual Studio 2010 SP1 has support for including all of these dependent assemblies in the deployment without the need to add references to them.

Simply right-click on the web project in Solution Explorer, select “Add Deployable Dependencies”, then select “ASP.NET Web Pages with Razor Syntax”. This adds the files to a “_bin_deployableAssemblies” folder in the web project. The contents of this folder are added to the web deployment package. Note that you can also check “ASP.NET MVC” so that you don’t have to remember to set “copy local” to “true” for the System.Web.Mvc assembly.

NOTE: You may have to remove the WebMatrix DLLs, added by this step, from the _bin_deployableAssemblies folder as they have the detrimental effect of redirecting the user to “Account/Logon” on every page request in some instances, regardless of the settings in your own web.config file. See this StackOverflow answer and this Microsoft Connect issue for more information.

The “Add Deployable Dependencies” is covered in more detail by Phil Haack in this blog post:

http://haacked.com/archive/2011/05/25/bin-deploying-asp-net-mvc-3.aspx

As many of the comments at the end of his blog point out, this does come across as a hack. One alternative would be to have some kind of standalone installer for ASP.NET MVC 3 with Razor and to run this on the target server. Another alternative would be to add another option in the Web Platform Installer. Either of these would have worked fine in my situation, but not when trying to deploy in a more locked-down environment.


Syncing Visual Studio Database Project With Entity Framework

Or: Quirks of Using Entity Designer Database Generation Power Pack

If you’re searching for a way to synchronise a “Visual Studio Database Project” (also known as “SQL Server Database Project” or “TSData Project”) with an Entity Framework model, you’ve probably already encountered the “Entity Designer Database Generation Power Pack”.

With model first development, the most basic use case is to synchronise the database project with the model. If you are trying to achieve this, you need to be aware of conventions that have been baked into the tool:

  1. Your “.edmx” file must have the same name as the “.dbproj” (i.e. database project name).
    • It is just the file name of the model that is important.
    • Neither the “Namespace” nor the “Entity Container Name” property of the model needs to match the file name. In fact, you will not be able to use names containing full-stops/periods (“.”) in these properties, but you may well require that for the file name if you’ve named your database project in this fashion.
    • It is just the file name of the database project that is important too. The name of the
  2. Your database project must lie in the root of the solution structure. i.e. do not put your database project in a solution folder.

Footnote

There won’t be any more development of the “Entity Designer Database Generation Power Pack”, as the functionality is being provided as part of the “SQL Server Developer Tools”, code named Juneau. This would be the better toolset to use to satisfy this use case.