Mass Assignment Vulnerability in ASP.NET MVC

By now you may have seen what happened to github last night. In case you didn't, let me bring you up to speed.

In a Ruby on Rails application, you can make a call to update your model directly from request parameters. Once you've loaded an ActiveRecord model into memory, you can poke its values by calling update_attributes and passing in the request parameters. This is bad because sometimes your model might have properties which you don't want to be updated by just anyone. In a rails application, you can protect this by adding attr_accessible to your model and explicitly stating which properties can be updated via mass assignment.

I'm not going to pretend to be a Ruby dev and try to explain this with a Rails example. Github already linked to this fantastic post on the subject regarding Rails here. What I'm here to tell you is that this situation exists in ASP.NET MVC also. If you aren't careful, you too could end up with a visit from Bender in the future.

So, let's see this vulnerability in action on an ASP.NET MVC project.

First, let's set up a model:

public class User {
    public int Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool IsAdmin { get; set; }
}

Then let's scaffold out a controller to edit this user:

public class UserController : Controller {
    IUserRepository _userRepository;

    public UserController(IUserRepository userRepository) {
        _userRepository = userRepository;
    }

    public ActionResult Edit(int id) {
        var user = _userRepository.GetUserById(id);
        return View(user);
    }

    [HttpPost]
    public ActionResult Edit(int id, FormCollection collection) {
        try {
            var user = _userRepository.GetUserById(id);
            UpdateModel(user);
            return RedirectToAction("Index");
        } catch {
            return View();
        }
    }
}
Do you see that UpdateModel call in the POST to '/User/Edit'? Pay attention to it. It looks innocent enough, but we'll see in a minute why it's dangerous.

Next, we scaffold up a view and remove the checkbox that allows us to update the user's Admin status. Once we're done, it looks like this:

That works. We can ship it, right? Nope. Look what happens when we doctor up the URL with an extra query string parameter, something like /User/Edit/42?IsAdmin=true:

I bet you can guess what's about to happen now. Here, I'll break execution right at the problematic UpdateModel line so you can watch the carnage:

Okay, you can see the current values to the right. We've loaded user #42 from the database and we're about to update all of his values based on the incoming request. Step to the next line and we see this:

UH OH. That's not good at all. User #42 is now an administrator. All it takes is an industrious user guessing the names of properties on your entities for you to get burned here.

So, what can we do to prevent it? One way is to change how we call UpdateModel. There's an overload that accepts an array of the property names you want to include. That looks like this:

[HttpPost]
public ActionResult Edit(int id, FormCollection collection) {
    try {
        var user = _userRepository.GetUserById(id);
        UpdateModel(user, new[] { "UserName", "FirstName", "LastName" });
        return RedirectToAction("Index");
    } catch {
        return View();
    }
}

We've just created a whitelist of properties we will allow to be updated. That works, but it's ugly, and it would become unmanageable for a large entity. Aesthetics aside, this method isn't secure by default: the developer has to actively do something to stay safe. It should be the other way around; it should be hard to fail and easy to succeed. The Pit of Success is what we want.

So, what can we really do to prevent it? The approach I typically take is to model bind to an object with only the properties I'm willing to accept. After I've validated that the input is well formed, I use AutoMapper to apply that to my entities. There are other ways to achieve what we want too, but I don't have time to enumerate all of the scenarios.
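As a sketch of that approach (EditUserInput is a hypothetical input model name, and this assumes AutoMapper's classic static Mapper API with a Mapper.CreateMap&lt;EditUserInput, User&gt;() call at startup):

```csharp
// A hypothetical input model exposing only the properties we're willing to accept.
// Note there is no IsAdmin here, so a forged ?IsAdmin=true has nothing to bind to.
public class EditUserInput {
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class UserController : Controller {
    IUserRepository _userRepository;

    public UserController(IUserRepository userRepository) {
        _userRepository = userRepository;
    }

    [HttpPost]
    public ActionResult Edit(int id, EditUserInput input) {
        if (!ModelState.IsValid)
            return View(input);

        var user = _userRepository.GetUserById(id);
        // Apply only the whitelisted input properties onto the entity.
        Mapper.Map(input, user);
        return RedirectToAction("Index");
    }
}
```

The model binder can only populate properties that exist on the input model, so the whitelist is enforced by the type itself rather than by developer discipline at each call site.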

Wrapping up
The point of all of this is that you need to understand exactly what your framework is doing for you. Just because there is a gun available, it doesn't mean you have to shoot it. Remember folks, frameworks don't kill people; developers with frameworks kill people. Stay safe out there friends, it's a crazy world.

Cross posted from Fresh Brewed Code. If you haven't taken a look over there, please take a moment to see what we've been up to.

Convention Based Application Settings via Autofac

Cross posted from Fresh Brewed Code. More on that in an upcoming blog post.

A while back I stumbled on a blog post from Chad Myers about a cool feature of FubuCore around handling application configuration. It was one of those "duh, why didn't I think of this years ago" moments. You see, I've inherited an application which has a bunch of configurable options in the app.config. The code had already isolated those settings to classes with virtual members and a lot of calls to ConfigurationManager.AppSettings. It was testable by inheriting the settings class and overriding all of the properties. After seeing Chad's post though, I had to scrap it. POCO Settings -- yes please.

I've started a project to provide this functionality with Autofac. The project is still very new, currently I've only implemented sourcing settings via (app|web).config. I'm hoping to push in support for environment variables and the registry as sources for settings. Additionally I'm going to make the conventions easier to override since you might not like your classes ending in "Settings".

Let's go through some of the code and see how it all works together.

The coordinator for the whole shebang is the Autofac registration source:

public class SettingsSource : IRegistrationSource {
    public bool IsAdapterForIndividualComponents {
        get { return false; }
    }

    public IEnumerable<IComponentRegistration> RegistrationsFor(Service service, Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor) {
        var s = service as IServiceWithType;
        if (s != null && s.ServiceType.IsClass && s.ServiceType.Name.EndsWith("Settings")) {
            yield return RegistrationBuilder
                .ForDelegate(s.ServiceType, (c, p) => c.Resolve<ISettingsFactory>().Create(s.ServiceType))
                .SingleInstance()
                .CreateRegistration();
        }
    }
}
This looks for types to resolve whose name ends with "Settings". When it's time to actually create the type, we'll ask the container for a Settings Factory and tell it to create the type being asked for. Then we'll register that in the container as a singleton. Not too much going on there.

The Settings Factory is responsible for creating the settings object, polling each settings provider for values, setting the values and verifying that it's all good before shipping it off:

public class SettingsFactory : ISettingsFactory {
    readonly IEnumerable<ISettingsProvider> _providers;

    public SettingsFactory(IEnumerable<ISettingsProvider> providers) {
        _providers = providers;
    }

    public object Create(Type type) {
        var obj = Activator.CreateInstance(type);

        var propertiesMissed = PopulateProperties(type, obj);

        if (propertiesMissed.Any()) {
            var message = String.Format("{0} is missing the following properties in configuration: {1}", type.Name, String.Join(",", propertiesMissed));
            throw new ConfigurationErrorsException(message);
        }

        return obj;
    }

    List<string> PopulateProperties(Type type, object obj) {
        var propertiesMissed = new List<string>();

        var props = type
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => p.CanWrite);

        foreach (var prop in props) {
            var value = _providers
                .Select(p => p.ProvideValueFor(type, prop.Name))
                .FirstOrDefault(v => v != null);
            if (value != null)
                prop.SetValue(obj, value, null);
            else
                propertiesMissed.Add(prop.Name);
        }

        return propertiesMissed;
    }
}

This class is where most of the work is happening. I'm not totally sold on the implementation, but it will do for now. Right now I'm just taking the first value I get which will prioritize the first registrations.

The last bit is an implementation of a Settings Provider. ConfigurationManager.AppSettings is just a NameValueCollection, so that makes things stupid easy. The Name/Value Provider looks like this:

public class NameValueSettingsProvider : ISettingsProvider {
    private readonly NameValueCollection _source;

    public NameValueSettingsProvider(NameValueCollection source) {
        _source = source;
    }

    public object ProvideValueFor(Type type, string propertyName) {
        var key = type.Name + "." + propertyName;
        return _source[key];
    }
}
If we have a FooSettings class with a property Bar then we'd expect to see "FooSettings.Bar" as a key. Really easy, nothing to see here.
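For example, feeding that FooSettings.Bar value from an app.config would look something like this:

```xml
<configuration>
  <appSettings>
    <!-- Key convention: {ClassName}.{PropertyName} -->
    <add key="FooSettings.Bar" value="w00t!" />
  </appSettings>
</configuration>
```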

The Payoff
Now, let's see it all together. Here's the spec that shows everything in play:

class FooSettings {
    public string Bar { get; set; }
}

class NeedsSomeSettings {
    public NeedsSomeSettings(FooSettings settings) {
        Settings = settings;
    }

    public FooSettings Settings { get; private set; }
}

public class when_resolving_a_class_that_depends_on_settings {
    static IContainer _container;
    static NeedsSomeSettings _needsomeSettings;

    Establish context = () => {
        var builder = new ContainerBuilder();
        builder.RegisterSource(new SettingsSource());
        builder.RegisterType<SettingsFactory>().As<ISettingsFactory>();
        builder.RegisterType<NameValueSettingsProvider>()
            .As<ISettingsProvider>()
            .WithParameter("source", new NameValueCollection {{"FooSettings.Bar", "w00t!"}});
        builder.RegisterType<NeedsSomeSettings>();
        _container = builder.Build();
    };

    Because of = () => _needsomeSettings = _container.Resolve<NeedsSomeSettings>();

    It should_resolve_type = () => _needsomeSettings.ShouldNotBeNull();
    It should_have_settings = () => _needsomeSettings.Settings.ShouldNotBeNull();
    It should_have_settings_populated = () => _needsomeSettings.Settings.Bar.ShouldEqual("w00t!");
}

No more mucking around with problematic configuration. Just take a dependency on your POCO settings and off you go. If you forget to configure something, you get a friendly error message. Like I said, there are bits of the API I'm not totally sold on, and I have a few more sources to implement. I also need to add a proper Autofac module to hide a lot of the registration noise.

So far I'm liking this a lot better than having random references to ConfigurationManager.AppSettings littered throughout the code. What do you think? Follow along on github and let me know what you like or hate.

devLink 2011

I was lucky enough to get subbed in last minute to give my CouchDB talk at devLink this past week. This was my first time speaking at a technical conference and it turned out to be a great experience.  I did the same bucket demo that worked so well at the user group and it was just as messy and just as awesome as before.

I'm guessing there was 40-50 people there and I had 30 people fill out response forms. The audience was really engaged and asked a lot of good questions.  The feedback was positive, so that's a bit of a relief. I really hope everyone enjoyed the talk as much as I enjoyed giving it.

The slides from my talk can be downloaded here. The sample app I demoed can be found on github, and the CouchDB/Wikipedia demo is also up on github.

Bug Recipe: Double Duty View and a Misguided Property Name

At work I've picked up an existing project and have been adding features to it. This has been a whole new thing for me since most of the stuff I've worked on in the past has been projects of my own creation. I recently got caught with my pants down when I let a bug slip over to the QA people. I got bitten because I trusted that a property name did what it said it did and because I didn't thoroughly test the possible scenarios around this particular feature. It's my fault for letting this by, but I just wanted to highlight how important naming really is.

It all started with a simple user story: User would like to track the expiration date of bacon.

Simple enough, I opened up the view model and added my ExpirationDate property.

    public class BaconViewModel {

        public virtual string Name { get; set; }

        public virtual DateTime? ExpirationDate { get; set; }

        // A lot more code here....

        public bool IsNew {
            get { return String.IsNullOrEmpty(Name); }
        }
    }
Up to this point, creating bacon really only had one property to fill out, the Name. What I failed to notice was that little devil of code in the IsNew property. Does a view model not having anything in the name really make it new? I'm getting a bit ahead of myself though, so ponder on that last question while I continue to tell my story.

The IsNew property itself isn't the only ingredient to this bug. The other part stems from the fact that in the project I inherited, the view does double duty for create and edit. Here's the basic view code before I added my feature:

    <h2>
        <% if (Model.IsNew) { %>
            Creating Bacon is the most noble of all tasks.
        <% } else { %>
            Be careful editing bacon.
        <% } %>
    </h2>

    <% using (Html.BeginForm()) { %>
        <%: Html.ValidationSummary(true) %>

        <div class="editor-label">
            <%: Html.LabelFor(model => model.Name) %>
        </div>
        <div class="editor-field">
            <%: Html.TextBoxFor(model => model.Name) %>
            <%: Html.ValidationMessageFor(model => model.Name) %>
        </div>

        <input type="submit" value="Sizzle" />
    <% } %>

Of course this is all just a contrived example, but do you see that bit of code in the h2 tag? If we're creating bacon we want to show one thing and if we are editing then we want to show another. I went about my business and added the UI code to capture that bacon expiration date. When I did all of this I introduced a nice little bug.

What happens when I'm creating bacon and give it a name, but fail to give it a good expiration date? The post happens like normal and a partial view model gets filled out and the ModelState has errors. We throw that partially completed view model back to the view along with the errors so that the user can weep at their awfulness. Except this time that IsNew property isn't feeling so new anymore and now the view goes into edit mode and thoroughly confuses the user. Witness the carnage:

Like I said above, this is a contrived example. What really happened in my case was an exception when I tried to grab the value of the nullable ExpirationDate property. Hopefully I've made my point, though. The two lessons I'm taking from this experience are:

  1. Don't have views that do double duty. If you have parts that are the same, you can use partials to keep things DRY.
  2. Make sure that your property names do what they say they do all of the time and not only just out of coincidence.
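Sketching lesson 1, the double-duty view could be split into separate create and edit views that share the form markup through a partial (the _BaconForm name is hypothetical):

```aspx
<%-- Create.aspx --%>
<h2>Creating Bacon is the most noble of all tasks.</h2>
<% Html.RenderPartial("_BaconForm", Model); %>

<%-- Edit.aspx --%>
<h2>Be careful editing bacon.</h2>
<% Html.RenderPartial("_BaconForm", Model); %>
```

Each view then owns its own messaging, the shared fields stay DRY inside the partial, and no IsNew flag is needed at all.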

No small animals got killed because of this bug obviously, but I did take a nice punch to the ego when QA found a fat bug due to an enhancement I made. That will just be one more thing I know to check when I'm spelunking in other people's code. Happy coding!

Sieve of Eratosthenes in C#

In my personal quest to find out if the programming language really matters, I present to you my first piece of code.  This is my reference implementation of Sieve of Eratosthenes in my preferred language, C#.  For this and all future language implementations, I'll put the code at the top (without comments) and then narrate my experience below.  Now, to the code!

IList<int> findPrimes(int max) {
    var vals = new List<int>((int)(max / (Math.Log(max) - 1.08366)));
    var maxSquareRoot = Math.Sqrt(max);
    var eliminated = new System.Collections.BitArray(max + 1);

    vals.Add(2);
    for (int i = 3; i <= max; i += 2) {
        if (!eliminated[i]) {
            vals.Add(i);
            if (i <= maxSquareRoot) {
                for (int j = i * i; j <= max; j += 2 * i)
                    eliminated[j] = true;
            }
        }
    }
    return vals;
}

I started by following the wikipedia definition and then optimized from there.

Algorithm Optimizations
I cut my work in half by treating the special case of '2'. We know that 2 is prime and all even numbers thereafter are not. So, we'll add two immediately and then start looping at 3 only checking odd numbers from there forward.

After we've found a prime, we only need to eliminate numbers from its square forward. Let's say we want to find all prime numbers up to 100 and we've just identified 7 as a prime. Per the algorithm, I'd need to eliminate 2*7, 3*7, 4*7, 5*7, 6*7, 7*7, 8*7, 9*7, 10*7, 11*7, 12*7, 13*7 and 14*7. None of the even multiples matter (an even times an odd is always even) and none of the multiples below the square of the prime matter, since we've already covered those in previous loops. So really we only have to eliminate 7*7, 9*7, 11*7 and 13*7. That's 9 fewer iterations, and the savings become more fruitful the deeper you go!

The last optimization is the square root calculation and check. We know from above that we only need to start eliminating at the square of the current prime. Therefore it also makes sense that we can stop trying once we get past the square root of the max. This saves a bunch more iterations.

Language Optimizations
Originally I had started by returning an IEnumerable<int>. I wasn't using the list you see above; instead I was using yield return i. I really like that syntax, but once I got to the VB.NET version (coming soon!), I didn't have a direct translation for the yield keyword. I took the lazy route in the VB version and just stuffed it all into a list and returned that. To my surprise it was faster! I went back and changed the C# version above and it performed better too. I'm not sure why, but I'm going with it.

What do you think that you get when do a sizeof(bool) in C#? I was surprised to find out that my trusty booleans actually take up a whole byte instead of a single bit. I speculate that there is a performance benefit that all of your types fit into a byte level offset in memory. I was thrilled to find out that we have a BitArray class that is useful for situations above when you need to store a lot of booleans and you need them to only take up a bit in memory. I'm not sure it helped anything, but I feel better knowing I'm using the least amount of memory possible. :)
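A quick sketch of the difference (sizes here count only the flag storage itself, ignoring object overhead):

```csharp
using System;
using System.Collections;

class BoolVsBitArray {
    static void Main() {
        // A bool[] stores one byte per flag; a BitArray packs eight flags per byte.
        var flags = new bool[1000000];      // roughly 1,000,000 bytes of flags
        var bits = new BitArray(1000000);   // roughly 125,000 bytes of flags

        Console.WriteLine(sizeof(bool));    // prints 1: a bool occupies a whole byte
        Console.WriteLine(bits[42]);        // prints False: entries default to false
    }
}
```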

Despite the fact that I know C# really well, I'm very thrilled that I was able to learn a few things about the language. Also, I'm really happy with the performance of this reference implementation. On my machine (2.66 GHz Core2 Duo and 2 GB of RAM) I can find all of the primes under 1,000,000 in 19ms. I think I've squeezed all I can out of this version. Please let me know if you see something I missed or did wrong and I'll make adjustments.

EDIT: I just added one more optimization that's worth noting. Instead of constructing my list with the empty constructor, I can shave several milliseconds off the larger sets by specifying a starting size for the internal array behind the list. If I set this size at or slightly above the final count of prime numbers, I avoid a lot of costly array copying as the array bounds keep getting hit. It turns out that there's quite a bit of math involved in accurately predicting the number of primes beneath a given number. I chose to cheat and just use Legendre's constant with the Prime Number Theorem, which is close enough for my purposes. I can now calculate all primes under 1,000,000 in 10ms on my machine. Neat!
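To put rough numbers on that estimate (a sketch; the method name here is mine, not from the sieve code above):

```csharp
using System;

class PrimeEstimate {
    // Legendre's approximation of the prime counting function:
    // pi(x) is roughly x / (ln(x) - 1.08366)
    static int EstimatePrimeCount(int max) {
        return (int)(max / (Math.Log(max) - 1.08366));
    }

    static void Main() {
        // For 1,000,000 the estimate comes out around 78,543, just above the
        // true count of 78,498 primes below one million, so the pre-sized
        // list never has to grow.
        Console.WriteLine(EstimatePrimeCount(1000000));
    }
}
```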
