Codestock 2012

This past weekend I was given the opportunity to speak about CouchDB at Codestock in Knoxville, TN. This is a talk I've been able to give a few times, but this is the first time I've attempted to record it. I've pulled out a 10 minute clip where we walk through storing a fast food order in a relational database and then storing the same order in a document database. The video is rough because all I had was my pocket camcorder.

CouchDB Bucket Demo, Codestock 2012 from digitalbush on Vimeo.

Also, here are the slides for the whole talk.

The sample code for the note taking app and map/reduce are in this repository. The wikipedia demo can be found in this repository. I'm still trying to get my legs with this whole speaking thing, so your feedback is much appreciated. Codestock was a blast and I hope to go back next year!

Mass Assignment Vulnerability in ASP.NET MVC

By now you may have seen what happened to GitHub last night. In case you didn't, let me bring you up to speed.

In a Ruby on Rails application, you can make a call to update your model directly from request parameters. Once you've loaded an ActiveRecord model into memory, you can poke its values by calling update_attributes and passing in the request parameters. This is bad because sometimes your model has properties which you don't want just anyone to update. In a Rails application, you can protect against this by adding attr_accessible to your model and explicitly stating which properties can be updated via mass assignment.

I'm not going to pretend to be a Ruby dev and try to explain this with a Rails example. GitHub already linked to a fantastic post on the subject here. What I'm here to tell you is that this situation exists in ASP.NET MVC also. If you aren't careful, you too could end up with a visit from Bender in the future.

So, let's see this vulnerability in action on an ASP.NET MVC project.

First, let's set up a model:

public class User {
    public int Id { get; set; }
    public string UserName { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool IsAdmin { get; set; }
}

Then let's scaffold out a controller to edit this user:

public class UserController : Controller {
    IUserRepository _userRepository;

    public UserController(IUserRepository userRepository) {
        _userRepository = userRepository;
    }

    public ActionResult Edit(int id) {
        var user = _userRepository.GetUserById(id);
        return View(user);
    }

    [HttpPost]
    public ActionResult Edit(int id, FormCollection collection) {
        try {
            var user = _userRepository.GetUserById(id);
            UpdateModel(user);
            return RedirectToAction("Index");
        } catch {
            return View();
        }
    }
}
Do you see that UpdateModel call in the POST to '/User/Edit'? Pay attention to it. It looks innocent enough, but we'll see in a minute why it's bad.

Next, we scaffold up a view and remove the checkbox that allows us to update the user's Admin status. Once we're done, it looks like this:

That works. We can ship it, right? Nope. Look what happens when we doctor up the URL by adding a query parameter:

I bet you can guess what's about to happen now. Here, I'll break execution right at the problematic line so you can watch the carnage:

Okay, you can see the current values to the right. We've loaded user #42 from the database and we're about to update all of his values based on the incoming request. Step to the next line and we see this:

UH OH. That's not good at all. User #42 is now an administrator. All it takes is an industrious user guessing the names of properties on your entities for you to get burned here.

So, what can we do to prevent it? One way would be to change the way we call UpdateModel. You can use the overload which allows you to pass in an array of properties you want to include. That looks like this:

UpdateModel(user, new[] { "UserName", "FirstName", "LastName" });

We've just created a whitelist of properties we will allow to be updated. That works, but it's ugly and would become unmanageable for a large entity. Aesthetics aside, using this method isn't secure by default. The developer has to actively do something here to be safe. It should be the other way around, it should be hard to fail and easy to succeed. The Pit of Success is what we want.

So, what can we really do to prevent it? The approach I typically take is to model bind to an object with only the properties I'm willing to accept. After I've validated that the input is well formed, I use AutoMapper to apply that to my entities. There are other ways to achieve what we want too, but I don't have time to enumerate all of the scenarios.
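Although this post is about ASP.NET MVC, the vulnerability isn't framework specific. Here's a minimal, framework-free JavaScript sketch (the updateModel helper and property names are hypothetical, not real framework code) showing how a naive model binder gets burned, and how a whitelist stops it:

```javascript
// Hypothetical model binder: copies request params onto an entity.
// With no whitelist, any property an attacker can guess gets through.
function updateModel(entity, params, allowed) {
    Object.keys(params).forEach(function (key) {
        if (!allowed || allowed.indexOf(key) !== -1) {
            entity[key] = params[key];
        }
    });
    return entity;
}

var user = { Id: 42, UserName: "bob", IsAdmin: false };

// A doctored request sneaks IsAdmin in alongside a legitimate field:
updateModel(user, { UserName: "bobby", IsAdmin: true });
console.log(user.IsAdmin); // true - user #42 is now an administrator

// Binding again with a whitelist keeps the flag safe:
user = { Id: 42, UserName: "bob", IsAdmin: false };
updateModel(user, { UserName: "bobby", IsAdmin: true },
    ["UserName", "FirstName", "LastName"]);
console.log(user.IsAdmin); // false
```

Whichever framework you're in, the shape of the fix is the same: the binder should only ever see the fields you explicitly intend to accept.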

Wrapping up
The point of all of this is that you need to understand exactly what your framework is doing for you. Just because there is a gun available, it doesn't mean you have to shoot it. Remember folks, frameworks don't kill people; developers with frameworks kill people. Stay safe out there friends, it's a crazy world.

Cross posted from Fresh Brewed Code. If you haven't taken a look over there, please take a moment to see what we've been up to.

Getting Started with Box2D Physics

The past few days I've been messing around with the Box2D physics engine. For someone who spends his days buried in business applications, this has been a fun bit of learning. Box2D has been ported to a ton of languages and I found a nice port to javascript called box2dweb.

First, let's look at a simple demo:

Click here for full jsFiddle

The first thing you'll need to do is set up a world and a loop to update it. The basics look like this:

var world = new b2World(
    new b2Vec2(0, 10), //gravity vector
    true               //allow bodies to sleep
);

window.setInterval(function () {
    world.Step(
        1 / 60, //timestep
        10,     //velocity iterations
        10      //position iterations
    );
    world.ClearForces();
}, 1000 / 60);

We just declared a world with some gravity. In the example above, we're applying gravity down, but you can have it pushing any direction you'd like. Next we set up an interval to run 60 times per second. Inside of that we tell the world to step 1/60th of a second while specifying the velocity and position iterations. The velocity and position iteration values can be altered to meet your needs: lower will yield better performance, higher will yield better accuracy.
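To build some intuition for what a step actually does, here's a toy sketch (plain semi-implicit Euler integration, not box2dweb internals) of advancing a single body under the same gravity vector:

```javascript
// Toy integrator, not box2dweb code: each step nudges velocity by
// gravity, then position by velocity, over a 1/60 second timestep.
function step(body, gravity, dt) {
    body.vx += gravity.x * dt;
    body.vy += gravity.y * dt;
    body.x += body.vx * dt;
    body.y += body.vy * dt;
}

var ball = { x: 0, y: 0, vx: 0, vy: 0 };
for (var i = 0; i < 60; i++) {
    step(ball, { x: 0, y: 10 }, 1 / 60); // one simulated second, 60 steps
}
// ball.vy is now ~10: one second of gravity at 10 units/s^2
```

The real engine layers collision detection and constraint solving on top (that's what the iteration counts tune), but the falling motion you see is just this.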

So, now you have a world with nothing in it. What fun is that? We'll need to add some stuff and start crashing it into each other.

There are two types of objects you can create. Static objects, like the triangle above, are fixed in the space. They are not affected by gravity or other objects. Dynamic objects are the fun ones that you get to move around. Our circles above are created and then nudged slightly to make them fall on either side of the triangle.


var fixDef = new b2FixtureDef;
fixDef.shape = new b2PolygonShape;
fixDef.density = 1.0;
fixDef.friction = 0.5;
fixDef.restitution = .5;
fixDef.shape.SetAsArray([
    new b2Vec2(-1, 0),
    new b2Vec2(0, -1),
    new b2Vec2(1, 0)], 3);

var bodyDef = new b2BodyDef;
bodyDef.type = b2Body.b2_staticBody;
bodyDef.position.Set(7, 7);
world.CreateBody(bodyDef).CreateFixture(fixDef);


//Same fixture density, friction and restitution from above.
fixDef.shape = new b2CircleShape(.5);

bodyDef.type = b2Body.b2_dynamicBody;
bodyDef.position.Set(7, 0); //start above the triangle
var body = world.CreateBody(bodyDef);
body.CreateFixture(fixDef);

I mentioned above that I'm nudging the circles. In order to push the shapes, we can use the ApplyImpulse method. It needs two parameters, a vector defining the force to be applied and a point that it should be applied to. Take a moment to go poke around in the fiddle and change the vector for the impulse. You can do some fun stuff like punch them straight up in the air. Go ahead, I'll wait.
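The mechanics behind ApplyImpulse are easy to sketch outside the library (toy code, not box2dweb): an impulse is an instantaneous change in momentum, so the body's velocity jumps by impulse divided by mass:

```javascript
// Toy sketch, not box2dweb: applying impulse J to mass m changes
// velocity by J/m all at once (remember y points "down" in our world).
function applyImpulse(body, impulse) {
    body.vx += impulse.x / body.mass;
    body.vy += impulse.y / body.mass;
}

var circle = { mass: 2, vx: 0, vy: 0 };
applyImpulse(circle, { x: 0, y: -6 }); // punch it straight up
// circle.vy is now -3 (moving up, against gravity)
```

That's why a heavier body needs a bigger impulse to get the same pop.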

There is one last bit you'll need to get your own samples going. All of the code we've done above describes the objects and their interactions. We still need a way to visualize it though. Luckily box2dweb has a debug drawing mode to render the objects on a canvas element. Here's what you need to set it up:

var debugDraw = new b2DebugDraw();
debugDraw.SetSprite(document.getElementById("canvas").getContext("2d")); //assumes a <canvas id="canvas"> element
debugDraw.SetDrawScale(30); //30 pixels per meter
debugDraw.SetFillAlpha(0.5);
debugDraw.SetFlags(b2DebugDraw.e_shapeBit);
world.SetDebugDraw(debugDraw);

With that, all that is left is to call world.DrawDebugData() right after you step. Now we can see our demolition derby in action!

I think that covers the basics. There are a lot of fun things you can do with the sample. Try changing the restitution (bounciness), the force of gravity, the direction of gravity, which direction you "nudge" the falling circles... heck, just start changing stuff and watch. It's way more fun than it should be.

Knockout.js Observable Extensions

This started out as a post about how to implement the new extender feature in Knockout.js 2.0. I wanted to see how well that would improve the experience of a money observable I created several months back. Once I had it implemented though, I was a bit disappointed. My extender doesn't have any arguments, but the knockout observable extend call only accepts a hash in the form of {extenderName:extenderOptions}. I ended up with a call that looked like this: var cash=ko.observable(5.23).extend({money:null});

That didn't leave a very good taste in my mouth. So, I pulled down knockout and set out to change the way the extenders were implemented. I've grown fond of how jQuery chaining worked, so why not bring that to Knockout's observables? Luckily Ryan Niemeyer was there to save me from myself and pointed out that I could just extend ko.subscribable.fn to achieve the desired effect.

I'm happy with the outcome. Let's explore the strategy a bit. Before I get in too deep, here's the end result:

Click here for full jsFiddle

You may be asking yourself, "What's so great about this?" This is basically the same as my previous sample with one exception: this implementation attaches directly to the subscribable type that KO provides. You might not have seen this type unless you've spent some time digging around the knockout.js source. It serves as a base for observables, observableArrays, and computed observables (formerly dependentObservables).

Here's the code that provides the money formatting:

(function (ko, $) {
    var format = function (value) {
        var toks = value.toFixed(2).replace('-', '').split('.');
        var display = '$' + $.map(toks[0].split('').reverse(), function (elm, i) {
            return [(i % 3 === 0 && i > 0 ? ',' : ''), elm];
        }).reverse().join('') + '.' + toks[1];

        return value < 0 ? '(' + display + ')' : display;
    };
 = function () {
        var target = this;

        var writeTarget = function (value) {
            target(parseFloat(value.replace(/[^0-9.-]/g, '')));
        };

        var result = ko.computed({
            read: function () {
                return target();
            },
            write: writeTarget
        });

        result.formatted = ko.computed({
            read: function () {
                return format(target());
            },
            write: writeTarget
        });

        return result;
    };
})(ko, jQuery);

Line 11 is where we start. By extending the subscribable.fn object we are adding a property to each and every subscribable object that KO creates for us. This will give us the ability to chain observables to one another as long as we return an observable from our method (line 32).

On line 12 we see that 'this' references the observable we're extending. I like this because there are no special method signatures we need to implement. Here I'm just grabbing my own reference of this as a variable named target.

Line 18 is where this starts to get a little interesting. I'm creating a writable computed observable that will return the value from the base observable when read. When it gets written to, it will sanitize the input and then write that to the base observable. This will be the observable we return for public consumption (line 32).

Line 25 is where the formatting comes into play. To the observable we're returning we'll add another observable as a property named 'formatted'. This is what we'll bind to whenever we want to see a pretty version of our value. This is another read/write computed observable like we did above. When the property is read from, it will pass the base observable's value through a formatter. The write is the same as the base observable.
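To see the read side in isolation, here's a standalone copy of the formatting logic (with a regex swapped in for the $.map trick so it runs without jQuery):

```javascript
// Standalone equivalent of the formatter above: group thousands with
// commas, prefix a dollar sign, and wrap negatives in parentheses.
function formatMoney(value) {
    var toks = Math.abs(value).toFixed(2).split('.');
    var display = '$' + toks[0].replace(/\B(?=(\d{3})+(?!\d))/g, ',') + '.' + toks[1];
    return value < 0 ? '(' + display + ')' : display;
}

console.log(formatMoney(-1234.56)); // ($1,234.56)
console.log(formatMoney(2000));     // $2,000.00
```

The parenthesized-negative style is an accounting convention, which is why the sample pairs it with a `negative` CSS class in the bindings below.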

Use It

var viewModel = {
    Cash: ko.observable(-1234.56).money(),
    Check: ko.observable(2000).money(),
    showJSON: function() {
        alert(ko.toJSON(this));
    }
};

viewModel.Total = ko.computed(function() {
    return this.Cash() + this.Check();
}, viewModel).money();

On lines 2, 3, and 11 you can see where I've used the observable extension I created above. The cool thing about this technique is that we don't care what kind of observable we're extending; it just works.

The showJSON function on line 4 is what gets fired when we click the "Show View Model JSON" button on the example above. Click it and you will see that our JSON serialization is clean. This is because the base observable we return is the unformatted (no dollar signs, commas, or parentheses) version.

The Payoff

<div class='ui-widget-content'>
    <label>How much in Cash?</label>
    <input data-bind="value:Cash.formatted,css:{negative:Cash()<0}" />
    <label>How much in Checks?</label>
    <input data-bind="value:Check.formatted,css:{negative:Check()<0}" />
    <span data-bind="text:Total.formatted,css:{negative:Total()<0}"></span>
    <button data-bind="click:showJSON">Show View Model JSON</button>
</div>

Each input's value is bound to the formatted version of the extended observable, and the span's text is bound to the formatted version of the computed observable.

I've rehashed this example 3 times now, but I'm happiest with this implementation. Extending *.fn.* isn't documented anywhere I saw, but maybe it should be. 😉 Maybe I should RTFM; it's clearly documented here. This chaining technique will be familiar to anyone who has used jQuery. What do you think about this technique?


Manage Your Dependencies with Rake and NuGet

Earlier I blogged about how to perform some basic build tasks in your .NET project with Rake and Albacore. There was one bit about managing dependencies I left off though because I thought it warranted its own post. For the projects I've been working on lately, we've managed to keep our source repository light and nimble by not checking in binaries for all of the dependencies.

NuGet 1.6 came out this week and this functionality is baked in. You can check out the NuGet way in the documentation. The bummer of this is that you have to enable "Package Restore" for each project in your solution. You also now have multiple packages.config files to maintain, one per project. Yes, you can manage it all through the GUI or the package manager console for your projects, but I want it all in one place. I also like not having to do anything on a per project basis other than standard references.

After several iterations on what Derek Greer started, I've ended up with the solution below. Dependencies are declared in the same packages.config format that nuget uses, so you can take something you've already created and centralize it. We have one build step to refresh our dependencies and it looks like this:
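For reference, a packages.config in this format looks something like the following (the package ids and version here are just placeholders, not real dependencies):

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- version is optional; the rake task below handles both cases -->
  <package id="SomePackage" version="1.2.3" />
  <package id="AnotherPackage" />
</packages>
```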

require 'rexml/document'
TOOLS_PATH = File.expand_path("tools")
LIB_PATH = File.expand_path("lib")

FEEDS = [
	#Your internal repo can go here

task :dependencies do
	file ="packages.config")
	doc =
	doc.elements.each("packages/package") do |elm|
		package = elm.attributes["id"]
		version = elm.attributes["version"]

		packagePath = "#{LIB_PATH}/#{package}"
		versionInfo = "#{packagePath}/version"
		currentVersion = if File.exists?(versionInfo)
		packageExists =

		if(!(version or packageExists) or currentVersion != version) then
			feedsArg ={ |x| "-Source " + x }.join(' ')
			versionArg = "-Version #{version}" if version
			sh "\"#{TOOLS_PATH}/nuget/nuget.exe\" Install #{package} #{versionArg} -o \"#{LIB_PATH}\" #{feedsArg} -ExcludeVersion" do |ok,results|, 'w') { |f| f.write(version) } if ok

There's a little bit of code there, but we're getting some good benefits from this one task.

Control over where our dependencies go.
I'm not a big fan of the packages/ folder that nuget uses by default. You may be able to change this in the GUI somewhere, but I haven't seen it yet. Yes, I'm aware that this is trivial, but I got used to storing my dependencies in lib/ and I'm okay with keeping that. 🙂 Every team has their own conventions they like to follow and it's nice to not have to change those just because you want to adopt a new tool.

No weird version number suffixes on our folders.
The default convention nuget uses is to store packages under a folder named {name}.{version}. That's cool until you need to update your dependency to a new version. When you do, you (or your tooling) will have to update the reference paths in all of your *.csproj files to accommodate the new path. I would prefer to store it in a folder with just the name of the package. Keep in mind, this removes the ability to run multiple versions of the same library for different projects within a solution. This hasn't come up on my projects yet though.

No need to keep tabs on what dependencies our dependency has.
I'm hoping this issue will change one day. As it stands right now (NuGet 1.6), if I have a single entry in my packages.config like so: <package id="NHibernate" version=""/> then calling $> nuget.exe install packages.config will not get NHibernate's dependency 'Iesi.Collections'. It turns out though, calling nuget like this: $> nuget.exe install NHibernate -Version will get that dependency for us, so that's exactly how our rake script does it.

I feel like the Ruby syntax reads fairly easily even if you aren't familiar with the language. Still, I think it would be beneficial to add a little commentary.

Line 5 is where we define our source(s) for nuget packages. At work we're using a file share to cache packages and then falling back to the default source when needed.

Lines 11 and 12 are where we load up the packages.config xml file using the XML parser that ships with a default Ruby install. From my reading, there are better gems to accomplish this faster, but this is a really tiny XML file we're dealing with.

Line 13 selects each package node and iterates over it. The next two lines just pick out the id and version attributes into variables. On lines 19 and 20 we read in the version file if it exists and also check if the package directory exists. We use all of that on line 22 to see if we need to restore this package.

If we're all systems go for NuGet launch, then line 23 turns the array of feeds from line 5 into '-Source' arguments for nuget.exe. Line 24 creates a version argument for nuget.exe if we have one. Finally, line 25 shells out to nuget.exe and assembles all of the command line arguments it needs to do the job. When we get our package, we poke (line 26) a file to track the version we've downloaded for future runs.

Wrapping Up
That's it. I almost didn't write this post since NuGet 1.6 supports this scenario out of the box. I still feel like it's worthwhile to have this as part of our rakefile if for no other reason than to manage my packages from a single place. What do you think? Please let me know if you see anywhere I could improve the process.

