Isolation – good for Azure Functions, bad for people
A short while ago I posted a summary of the current state of play of Azure Functions and .NET 5. In short, to run your function in .NET 5 you need to use the new Isolated Process. It’s so new that it’s missing a lot of the Azure Functions features, e.g. several bindings and Durable Functions. So Durable Functions users are stuck on .NET Core 3.1 until .NET 6 is supported in the In-process version.
Whilst all that is still true, there is now an update from the team on where they’re intending to go in future. The In-process version will end with the .NET 6 release and development will concentrate on bringing the Isolated Process up to feature parity in time for .NET 7. Read their post here. After that they are promising to support .NET versions as and when they are released.
This is best illustrated by reposting their roadmap from that link:
The Durable Functions support in the Isolated Process is said to arrive in “2022 or possibly earlier”. I look forward to it.
If you need to provide a System.Reflection.Assembly instance to an API [1], there are several mechanisms for doing so. They roughly split into two camps:
Run-time assembly loading
Assemblies known at compile time
The run-time assembly loading includes scenarios such as having a plug-in architecture where the code being referenced cannot be known at the time of compilation.
For the other camp, if we know exactly which assembly we need to reference at compile time we have a couple of options. We can use the name of the assembly as a string like so:
Assembly.Load("MyCompany.Util");
(Note that if the assembly is already loaded the runtime will just return the loaded instance of that assembly and won’t attempt to load it again.)
Alternatively we can use a type from that assembly like so:
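A minimal sketch of that approach, using an arbitrary public type known to live in the target assembly (SomeUtilityClass here is purely illustrative):
using System.Reflection;
/* … */
// SomeUtilityClass is any type defined in the assembly we want to load.
Assembly assembly = Assembly.GetAssembly(typeof(SomeUtilityClass));
// Or, equivalently, via the Assembly property on Type:
Assembly sameAssembly = typeof(SomeUtilityClass).Assembly;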
The problem with the assembly-name string approach is that there is no compile-time checking. The typeof approach allows for compile-time checking, but introduces an artificial dependency in the calling code on a class that it only needs for the purpose of getting at the assembly. That calling code is then subject to any renaming or removal of the class, when in reality it cares only about the assembly and not the type.
The solution I’ve gone for is to create a static, empty class with a similar name to the assembly in the root of the default namespace of the assembly I wish to reference and use this in the typeof:
using MyCompany.Util;
/* … */
Assembly.GetAssembly(typeof(MyCompanyUtil));
This provides us with a compiler error if the assembly reference is dropped or the assembly is renamed. It will take part in any necessary refactoring operations and is not dependent on irrelevant types.
[1] Examples include Autofac’s MVC and WebApi integration: ContainerBuilder.RegisterControllers & ContainerBuilder.RegisterApiControllers
This tip is applicable if you’re using Entity Framework Code First with dynamic proxies and you have a lot of objects attached to your context, for whatever reason (e.g. within a batch job).
The first thing to note is that if you have a lot of objects attached to your context you want to avoid DetectChanges being called on the context unless absolutely necessary. DetectChanges compares the original and current state of each object and uses this information for a couple of purposes: marking entities as added/modified/deleted, and fixing up relationships such as bi-directional navigation properties and foreign key columns.
DetectChanges is obviously necessary when SaveChanges is called, but it’s also called whenever one of these operations is called:
DbSet.Find
DbSet.Local
DbSet.Remove
DbSet.Add
DbSet.Attach
DbContext.GetValidationErrors
DbContext.Entry
DbChangeTracker.Entries
DetectChanges calls can be avoided, though, by turning automatic change detection off around the work, for example with a small disposable helper.
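A minimal sketch of that helper, assuming the NoChangeTracking type used in the examples below does nothing more than toggle the context’s AutoDetectChangesEnabled setting for the lifetime of a using block:
using System;
using System.Data.Entity;

public sealed class NoChangeTracking : IDisposable
{
    private readonly DbContext _context;
    private readonly bool _previousSetting;

    public NoChangeTracking(DbContext context)
    {
        _context = context;
        // Remember the current setting so it can be restored afterwards.
        _previousSetting = context.Configuration.AutoDetectChangesEnabled;
        context.Configuration.AutoDetectChangesEnabled = false;
    }

    public void Dispose()
    {
        // Restore automatic change detection to whatever it was before.
        _context.Configuration.AutoDetectChangesEnabled = _previousSetting;
    }
}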
With the context wrapped like this, DetectChanges will not be called. (You could even turn automatic change detection off globally, but then you would at least need to remember to call DetectChanges manually before SaveChanges.)
This technique works okay, but it can result in problems if you are relying on two-way navigation properties. For example:
var parent = new Parent();
var child = new Child();
parent.Children.Add(child);
using (new NoChangeTracking(context))
{
    context.Parents.Add(parent);
}
Debug.Write(child.Parent.Id); // Null reference exception
The child.Parent navigation property will not have been set as we set AutoDetectChangesEnabled to false before we performed the DbSet.Add. We could choose not to turn it off, but that would lead again to the performance issues. We could also explicitly alter both the parent and child navigation properties each time we change one end, but that’s extra code and it’s easy to forget to do.
With dynamic proxies enabled, there’s an easier way. Instead of creating the entities by using the new operator, you create a dynamic proxy by using the DbSet.Create method. This dynamic proxy contains code to intercept alterations to each navigation property and ensure that any reciprocal navigation property on the target object is updated. E.g. when parent.Children.Add(child) is called, the child.Parent property is automatically populated.
Here’s that code again but with the correct proxy initialization:
var parent = context.Parents.Create();
var child = context.Children.Create();
parent.Children.Add(child);
using (new NoChangeTracking(context))
{
    context.Parents.Add(parent);
}
Debug.Write(child.Parent.Id); // No null reference!
That’s it. There are many other performance considerations, but combining switching off AutoDetectChangesEnabled with proper use of dynamic proxies can get us a long way.
Dependency injection as a pattern provides a lot of useful nudges to get you to produce easily readable and maintainable code. One way in which it does this is to make dependencies explicit so you can see exactly what services a class requires. When using Entity Framework most people are passing the whole context through as a dependency. This post explores an alternative to this approach that provides more clarity of the client code’s use of the context.
We’ve been coding with Entity Framework here at laZook for a while now, using the code first workflow. We’re using Autofac as our dependency injection framework. We inject dependencies into the constructor so that there is one clear place to view a class’s dependencies.
We used to inject the whole DbContext derived class into each type that needed to do anything with the context, e.g. add entities or save changes. This was fairly easy to do, but led to some confusion. Let’s look at an example program using this technique:
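A condensed sketch of the kind of thing we had (the entity, context and method names here are illustrative), with the whole context injected into every class:
using System;
using System.Data.Entity;

public class Widget { public int Id { get; set; } }
public class Wotsit { public int Id { get; set; } }

public class ShopContext : DbContext
{
    public IDbSet<Widget> Widgets { get; set; }
    public IDbSet<Wotsit> Wotsits { get; set; }
}

public class WidgetGenerator
{
    private readonly ShopContext _context;
    public WidgetGenerator(ShopContext context) { _context = context; }

    public void Generate()
    {
        _context.Widgets.Add(new Widget());
        _context.SaveChanges(); // the easily-missed extra commit
    }
}

public class WotsitGenerator
{
    private readonly ShopContext _context;
    public WotsitGenerator(ShopContext context) { _context = context; }

    public void Generate()
    {
        if (DateTime.Today.DayOfWeek == DayOfWeek.Friday)
            throw new InvalidOperationException("No Wotsits on a Friday");
        _context.Wotsits.Add(new Wotsit());
    }
}

public class Coordinator
{
    private readonly ShopContext _context;
    private readonly WidgetGenerator _widgetGenerator;
    private readonly WotsitGenerator _wotsitGenerator;

    public Coordinator(ShopContext context, WidgetGenerator widgetGenerator, WotsitGenerator wotsitGenerator)
    {
        _context = context;
        _widgetGenerator = widgetGenerator;
        _wotsitGenerator = wotsitGenerator;
    }

    public void Run()
    {
        _widgetGenerator.Generate();
        _wotsitGenerator.Generate();
        _context.SaveChanges(); // looks like the only commit, but it isn't
    }
}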
In this simple example we can see that the Coordinator class calls upon a couple of worker classes and then persists any changes. The worker classes add the entities to their respective DbSets.
There is a problem with the code, though. If it’s a Friday, no Wotsits will be made. The code will exit due to the exception. If you were looking only at the Coordinator and WotsitGenerator code, you’d be forgiven for thinking that there was a single unit of work and it would not be committed. It looks like the Coordinator is responsible for the SaveChanges call. However, a closer look at the WidgetGenerator reveals a call to SaveChanges after it has created a widget.
It’s a simple example, but where SaveChanges is buried in larger code it can be difficult to work out what is being committed and what isn’t.
What to do about this? One answer is to ensure that SaveChanges is only ever called at the very top level as the last action before the end of the program (in this example) or page request / job execution / button click handler. This works, but is somewhat limiting. What if you want to perform multiple SaveChanges to checkpoint during a long running operation? What if the success or failure of one SaveChanges determines whether or not another unit of work is embarked upon?
We need to make it clear who owns the responsibility for initiating completion of the unit of work.
The solution we’ve come up with is to create an ICompleteUnitOfWork interface that contains the SaveChanges method and have the context implement this interface. This interface is then declared as a dependency for the class that has the responsibility of calling SaveChanges. This allows us to glance at a class constructor and see whether that class owns the responsibility for completing the unit of work. Elsewhere we inject IDbSet<TEntity> instances. This helps us see which entities (or at least which aggregate roots) a class is involved in reading or editing.
Here’s the same code with the new dependencies and the errant SaveChanges in WidgetGenerator removed. We can clearly tell that WidgetGenerator does not call SaveChanges by seeing that it only takes a dependency on IDbSet<Widget>.
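Reusing the entity classes from the sketch above, the revised dependencies might look something like this (again, the exact shapes are assumed):
using System;
using System.Data.Entity;

public interface ICompleteUnitOfWork
{
    // Matches DbContext.SaveChanges, so the context can implement the interface directly.
    int SaveChanges();
}

public class ShopContext : DbContext, ICompleteUnitOfWork
{
    public IDbSet<Widget> Widgets { get; set; }
    public IDbSet<Wotsit> Wotsits { get; set; }
}

public class WidgetGenerator
{
    private readonly IDbSet<Widget> _widgets;
    public WidgetGenerator(IDbSet<Widget> widgets) { _widgets = widgets; }

    public void Generate()
    {
        // No SaveChanges here: this class can only add widgets.
        _widgets.Add(new Widget());
    }
}

public class WotsitGenerator
{
    private readonly IDbSet<Wotsit> _wotsits;
    public WotsitGenerator(IDbSet<Wotsit> wotsits) { _wotsits = wotsits; }

    public void Generate()
    {
        if (DateTime.Today.DayOfWeek == DayOfWeek.Friday)
            throw new InvalidOperationException("No Wotsits on a Friday");
        _wotsits.Add(new Wotsit());
    }
}

public class Coordinator
{
    private readonly ICompleteUnitOfWork _unitOfWork;
    private readonly WidgetGenerator _widgetGenerator;
    private readonly WotsitGenerator _wotsitGenerator;

    public Coordinator(ICompleteUnitOfWork unitOfWork, WidgetGenerator widgetGenerator, WotsitGenerator wotsitGenerator)
    {
        _unitOfWork = unitOfWork;
        _widgetGenerator = widgetGenerator;
        _wotsitGenerator = wotsitGenerator;
    }

    public void Run()
    {
        _widgetGenerator.Generate();
        _wotsitGenerator.Generate();
        _unitOfWork.SaveChanges(); // the only class that completes the unit of work
    }
}
For this to behave as a single unit of work, the container (Autofac in our case) needs to resolve ICompleteUnitOfWork and the IDbSet<TEntity> instances from the same context instance within a given scope.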
There are some usage patterns of Entity Framework that this approach doesn’t support too well, but it can be extended to do so. For example, the injected abstractions give you no way to get at the DbContext.Entry method for attaching objects and setting their state. You could introduce another interface for this, IManageUnitOfWorkObjectState, but it feels clunky.
Also, injecting the IDbSets is a good first step, but I actually prefer creating some repositories on top of the IDbSets as it better allows for caching and encapsulation of common queries.
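As a rough illustration of that last point, a thin repository wrapped around an IDbSet might look like this (the Widget entity and its Name property are assumed):
using System.Data.Entity;
using System.Linq;

public interface IWidgetRepository
{
    Widget FindByName(string name);
    void Add(Widget widget);
}

public class WidgetRepository : IWidgetRepository
{
    private readonly IDbSet<Widget> _widgets;

    public WidgetRepository(IDbSet<Widget> widgets)
    {
        _widgets = widgets;
    }

    public Widget FindByName(string name)
    {
        // Common queries live in one place instead of being repeated by callers,
        // and this is where a caching layer could be slotted in later.
        return _widgets.FirstOrDefault(w => w.Name == name);
    }

    public void Add(Widget widget)
    {
        _widgets.Add(widget);
    }
}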
I’m interested in any development suggestions or criticisms of the ideas. Let me know here or on Twitter.
Six months ago I joined a team working on a new start-up called laZook. It’s an online distributor, providing brands with a single route to many existing eCommerce channels and some that we’re helping to develop from scratch. We handle the listing, fulfilment and payment for these brands’ products.
We have various features at varying levels of maturity. One of those is a “microstore” that provides a drop-down checkout on a website. This is in use already on several blogs and magazine sites. It needs more work but, as it stands, it provides a publisher with a choice of thousands of products to sell on their site. They insert a JavaScript snippet on their page and their site is now eCommerce-enabled. For each purchase they receive commission, much like in a traditional affiliate scheme. One advantage here is that the purchaser is not taken off site for the checkout flow.
We’re currently using PayPal for the payment processing, but they don’t seem to have moved with the times very much and do force us into some poor user experience during checkout. As a result we’ll likely be moving to Stripe at some point soon. They have some modern APIs and can offer a lot more control over the experience.
If you want to see the Microstore in action (even if it is a little rough around the edges), check out the Ex Cellar Wine Club blog. Please do give us some feedback on what you think!
I’m playing around with Quartz.net and adding support for a persistent job store via the ADO.NET Job Store. As per the recommendation, I’m instructing the job store to persist job parameters in plain text rather than BLOBs, using the configuration:
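Roughly, that setting looks like this (the job store type shown, plus the omitted data source, connection string and driver delegate settings, are assumptions about the wider setup):
using System.Collections.Specialized;
using Quartz.Impl;

// Tell the ADO.NET job store to persist JobDataMap contents as name/value
// string pairs rather than as a serialized BLOB.
var properties = new NameValueCollection
{
    ["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz",
    ["quartz.jobStore.useProperties"] = "true"
    // ... data source, connection string and driver delegate settings omitted
};
var schedulerFactory = new StdSchedulerFactory(properties);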
Unfortunately in triggering a simple job, which has no explicit job data map, I receive this error:
JobDataMap values must be Strings when the 'useProperties' property is set. Key of offending value: LAST_MODIFIED_TIME
When looking in the debugger at the JobDataMap object provided to the job I scheduled, there is no LAST_MODIFIED_TIME present. Digging a bit deeper, it seems that there is another job running called FileScanJob, scheduled by the XMLSchedulingDataProcessorPlugin (used to read the job and trigger configuration from an XML file). This job adds the LAST_MODIFIED_TIME entry to its JobDataMap during job execution, which is of type DateTime rather than string.
Why is this raising an exception? It comes down to the implementation of the StdAdoDelegate class. When the quartz.jobStore.useProperties configuration value is set to true, it deliberately refuses to write to the job store database any job data whose keys and values are not both strings. Despite this restriction, it still stores the data (in the form of a NameValueCollection) using binary serialization after this check.
To come back to the original reason for setting this property: the tutorial advises using it to avoid serializing complex types and running into versioning issues after type upgrades. I’d contest that this objective could be achieved just as simply by supporting all the primitive .NET types, whose serialization is unlikely to change. The change to StdAdoDelegate would be to validate each name/value pair when quartz.jobStore.useProperties is true, ensuring only simple types are in use, and to store the data as a System.Collections.Hashtable so that future changes to the JobDataMap class in Quartz wouldn’t cause serialization issues.
Another solution could be to have an XmlAdoDelegate that used XML serialization instead of binary serialization.
Maybe I am missing some extra design constraint here. I’ve posted the issue to GitHub (here) to see if my thoughts can be easily shot down.
Edit: This post refers to an older version of the BDD installer. As per JasonH’s comment below, Microsoft has released a new installation package which should hopefully fix the installation bug. It can be found here: http://www.microsoft.com/en-us/download/details.aspx?id=4123
Original post:
Microsoft’s Balanced Data Distributor does not install on top of SQL Server 2008 R2 SP1. It installs fine without SP1 but otherwise comes up with the error:
“The installation is not successful. Check the following prerequisites: 1. Either Integration Services or BIDS has to be installed. 2. The version of these components has to be either SQL Server 2008 SP2 (or future SPs) or SQL Server 2008 R2 (or future SPs)”
In my case all the prerequisites were met. As per this thread, I used Process Monitor to examine the registry keys it was checking for the version numbers. I then modified the keys to pretend I was running SQL Server 2008 R2 RTM, ran the BDD installer again (successfully) and changed the keys back to their original values.
Warning: This is not best practice advice! If you do the same as I did and your production system is rendered unusable, this will be entirely your fault. I did this on a throwaway development environment to save time uninstalling SP1 and reinstalling it.
The keys I altered were all in the following path:
HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\100\
The specific keys and the values I set them to were:
DTS\Setup\SP = 0
DTS\Setup\Version = 10.50.1600.1
BIDS\Setup\SP = 0
BIDS\Setup\Version = 10.50.1600.1
When setting up a production system, please ensure you apply the BDD installation before SP1. Don’t use this technique, which will probably render your environment unsupportable!
Changed the “copy local” flag to true on the System.Web.Mvc assembly reference in my web project (as suggested in several pre-Razor articles about BIN deployment of ASP.NET MVC sites)
Ensured I was using a .NET 4 application pool.
Once all this was done and I browsed to the home page of the application, I started to get a series of errors about missing DLLs, starting with System.Web.Helpers. Additional missing assemblies included:
Microsoft.Web.Infrastructure
System.Web.Razor
System.Web.WebPages.Administration
System.Web.WebPages.Deployment
System.Web.WebPages.Razor
The solution compiled fine and ran okay on all the development machines. It turned out that these DLLs are needed for Razor based web pages and are a requirement over and above the standard ASP.NET MVC references.
My initial solution was to locate these assemblies and add references to them from the project with “copy local” as “true”. The assemblies are in the following folder on development machines:
C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies
However, it was, as I say, a series of errors as there were plenty of DLLs that needed to be added. On further searching it turned out that there is support in Visual Studio 2010 SP1 for including all of these dependent assemblies in the deployment without the need to add references to them.
Simply right click on the web project in solution explorer and select “Add Deployable Dependencies” then select “ASP.NET Web Pages with Razor Syntax”. This adds the files to a “_bin_deployableAssemblies” folder in the web project. The contents of this folder are added to the web deployment package. Note that you can also check “ASP.NET MVC” so that you don’t have to remember to set “copy local” to “true” for the System.Web.Mvc assembly.
NOTE: You may have to remove the WebMatrix DLLs, added by this step, from the _bin_deployableAssemblies folder as they have the detrimental effect of redirecting the user to “Account/Logon” on every page request in some instances, regardless of the settings in your own web.config file. See this StackOverflow answer and this Microsoft Connect issue for more information.
The “Add Deployable Dependencies” option is covered in more detail by Phil Haack in this blog post:
As many of the comments at the end of his blog point out, this does come across as a hack. One alternative would be to have some kind of standalone installer for ASP.NET MVC 3 with Razor and to run this on the target server. Another alternative would be to add another option to the Web Platform Installer. Either of these would have worked great in my situation, but not when trying to deploy to a more locked-down environment.
With the advent of Google’s SPDY improvements to the HTTP protocol, could we see an end in sight to the practice of sharding content on web sites?
In the news today (from The Register), Google report a 15% increase in speed when using SPDY to communicate with their web services from Chrome browsers. The SPDY technology seems to establish a single TCP session in which multiple HTTP-like requests are managed efficiently. It means, for instance, that many requests can be processed concurrently rather than being limited to the usual two (or six in recent browsers) connections per domain.
The practice of content sharding is used, in part, to achieve a similar effect. It lets browsers treat the content as coming from different servers, so they will open more concurrent connections. Another benefit is in reducing cookie payload by serving content from domains the cookies haven’t been set for.
SPDY should take care of all of this for us. In fact, using separate domains to serve images, CSS and JavaScript will perform worse with SPDY, as multiple TCP sessions will be established. So, assuming this technology becomes more widely available on the server side, we should probably start sharding content selectively, based on the capabilities of the user agent.
“Third parties may wish to store information on the equipment of a user, or gain access to information already stored, for a number of purposes, ranging from the legitimate (such as certain types of cookies) to those involving unwarranted intrusion into the private sphere (such as spyware or viruses). It is therefore of paramount importance that users be provided with clear and comprehensive information when engaging in any activity which could result in such storage or gaining of access. The methods of providing information and offering the right to refuse should be as user-friendly as possible. Exceptions to the obligation to provide information and offer the right to refuse should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user. Where it is technically possible and effective, in accordance with the relevant provisions of Directive 95/46/EC, the user’s consent to processing may be expressed by using the appropriate settings of a browser or other application. The enforcement of these requirements should be made more effective by way of enhanced powers granted to the relevant national authorities.”
It’s not clear to me what “third parties” constitutes, but I assume it does not include the owner of the website the user is visiting. So Google Analytics or Omniture SiteCatalyst would count as a third party.
It’s possible you could interpret the use of an eCommerce site as a user explicitly requesting the service of browsing products for the purpose of purchasing. If so, then the storage of analytics tags could be interpreted as necessary to provide an effective browse-and-purchase experience through a process of analysis and improvement. Then it might pass as being for the “legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user”. It does feel rather tenuous, though.
This is all personal opinion and is not legal advice. Please seek somewhere else to pin the blame if you get taken to court for not asking your customers for permission to store cookies!