adrift in the sea of experience

Wednesday, December 1, 2010

bisect your source code history to find the problematic revision

I am working on a little application to edit Markdown documents with instant preview. (For those of you who aren't into building things from source, here's a Windows installer to put a downmarker shortcut in your Start Menu.) I like to use both Windows and Linux, so I try to make sure that it runs on both Microsoft .NET and Mono.

When I last tested my application on Mono, I discovered a very annoying problem: each time I typed something in a Markdown document, a new Nautilus file browser window was launched! I had absolutely no idea why this would happen or where to start debugging. To make matters worse, I hadn't tested on Mono for about 40 revisions, so there were a lot of possible changes that might have introduced the problem.

I have the source code history in a mercurial repository, so I decided to give the "bisect" feature a try. I started by marking the latest revision as "known bad":

hg bisect --bad

I also remembered making some mono-specific bug fixes, and the problem didn't exist at that point. So I grepped the output of hg log for commit messages containing the word "mono" and marked the revision as good:

hg bisect --good 627d6

At this point the bisect command has a revision range to work with. It automatically updates your working copy to a revision right in the middle of that range. You just have to rebuild your application, verify whether the problem is still there, and mark the revision as good or bad. (It is also possible to run this test automatically with a script.) The bisect command then cuts the remaining revision range in half, selects yet another revision in the middle, and so on.
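To see why so few tests are needed, note that each test halves the remaining range, so about log2(N) builds suffice for N candidate revisions. Here is a little Python sketch (my illustration, not part of any Mercurial tooling) simulating the process over 40 revisions:

```python
def bisect_steps(n_revisions, first_bad):
    """Simulate bisection over a revision range; return the first bad
    revision found and how many revisions had to be tested."""
    good, bad = -1, n_revisions - 1   # sentinel good before the range, last revision known bad
    steps = 0
    while bad - good > 1:
        mid = (good + bad) // 2
        steps += 1
        if mid >= first_bad:          # "build and test" this revision: is it bad?
            bad = mid
        else:
            good = mid
    return bad, steps

rev, steps = bisect_steps(40, first_bad=25)
print(rev, steps)   # revision 25 found after 5 tests, about log2(40)
```

With 40 revisions you never need more than six tests, which matches the "5 or 6 tests" below.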

Sure enough, after about 5 or 6 tests I got this message:

The first bad revision is:
changeset:   42:c4bbabe79dde
user:        wcoenen
date:        Sun Nov 07 23:31:32 2010 +0100
summary:     Links are now handled by opening .md or by external application.

Aha! Looks like I added some code to handle link clicks by passing the URL to the OS (to open it with a suitable external application). And obviously this code is now being triggered erroneously on Mono whenever the document is updated. Thank you, bisect!

Wednesday, November 24, 2010

git error: error setting certificate verify locations

I was trying to clone a repository from GitHub when I ran into this error:

Cloning into docu...
error: error setting certificate verify locations:
  CAfile: \bin\curl-ca-bundle.crt
  CApath: none
 while accessing

If you google around, many people "solve" this by disabling the SSL certificate check entirely. Obviously there is a reason for that check, so disabling it is not quite the right solution! It turns out that there is a mistake in the gitconfig file that comes with the msysGit setup (I have Git- installed). The right fix is to change the sslCAinfo setting in "c:\program files\git\etc\gitconfig" to this:

sslCAinfo = c:\\program files\\git\\bin\\curl-ca-bundle.crt

Tuesday, November 23, 2010

Mercurial and subversion subrepos

It is not yet mentioned in the Mercurial book, but Mercurial has a subrepos feature to pull code from other repositories into your project as a nested repository. It is a bit similar to SVN externals and git submodules.

Better yet, it also works with Subversion! There are still some bugs to be worked out though: you'd better not move your SVN subrepo around inside your Mercurial repository. For all the ugly details, see my bug report.

Friday, November 19, 2010

An equivalent of .bashrc for the windows cmd.exe shell

Tired of the "cd" command in Windows refusing to navigate to folders on other drives? Put this in the [HKEY_CURRENT_USER\Software\Microsoft\Command Processor] AutoRun value:

c:\cmdauto.cmd

This will cause the given script to be executed each time a cmd.exe window is opened. I found out about that thanks to this post.

Now put this in c:\cmdauto.cmd

@echo off
doskey cd=pushd $*

The doskey command creates aliases, here overriding the behavior of "cd".

Wednesday, June 23, 2010

ZFS-Fuse reliability report

A few weeks ago, I got this question in the comments on my last NAS post:
Since you've been using ZFS-fuse for a time now, can you report here or blog about its stability? Has the daemon crashed on you, while taking a snapshot, or while "scrub"-ing?

How much data have you passed in?

First, note that the version of ZFS-FUSE which I am using is not a particular release, but this revision from the official git repository. I have not bothered to update my ZFS binaries since January, because I didn't encounter any problems.

The "zfs list" output for my pool shows that I am using 352 GiB. The pool consists of two 465 GiB disks in a mirror setup.

NAME               USED  AVAIL  REFER  MOUNTPOINT
nas-pool           352G   105G    24K  /nas-pool
nas-pool/archive   118G   105G  71.1G  /nas-pool/archive
nas-pool/bulk      234G   105G   228G  /nas-pool/bulk

The pool is scrubbed by a cron script each Sunday night. I have not seen any crashes or hangs during scrubbing, and the scrubs have not detected any errors so far.

"archive" is snapshotted every night. "bulk" is snapshotted only on Sunday nights. The pool contains more than a hundred snapshots. Again, I have not seen any crashes or hangs.

Then there is read/write activity: I use it to automatically archive my emails from Gmail (nightly), sync with my dropbox folder (continuously), and to store my mp3s, photos, videos, downloads and backups when I need to. When reading or writing NAS files, I am usually working from my laptop over WiFi which is a speed bottleneck, so this probably doesn't really stress the file system. On the other hand, I do regularly make a full off-line backup of the "archive" filesystem by hooking up an external USB disk to the NAS. I haven't seen ZFS crash during any of this.

From this, I conclude that ZFS-Fuse is pretty stable.

Saturday, May 29, 2010

Exponential growth has limits

I saw this comment in the thread of a slashdot post about Wall Street today:

But everything thus far shows us that perpetual growth is possible. Technology is a wonderful thing - each year we're able to do more with less.

That's not to say that a lot of what goes on in the market isn't pure, unadulterated bullshit, but real, honest-to-goodness "growth" won't stop until technology does.

This is a textbook example of the "cornucopian" view on economics. The implicit assumption here is that there are no limits to technological improvement, or that those limits are extremely far in our future. I think that this assumption is flat out wrong and dangerous. It is these sorts of ideas which have led large parts of the global economy to eerily resemble Ponzi schemes.

I wrote the following post in reply:

There are limits to exponential growth. (And make no mistake, growth expressed as a fixed percentage per year is exponential). Technology can push the limits closer to what the laws of physics allow, but technology cannot change the laws of physics.

Let's look at some numbers to drive the point home. Our global energy consumption in 2008 was estimated to be 474 exajoules.

The total energy received by the earth from the sun during a year is about 5 million exajoules, a fraction of which reaches the surface. 5 million is much more than 474. But at a seemingly modest 2% per year growth rate (as it was between 1980 and 2006), our energy consumption will match those 5 million exajoules in less than 500 years!
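The arithmetic behind that claim is a one-liner: solve 474 · 1.02^t = 5,000,000 for t. A quick Python check, using only the numbers from the post:

```python
import math

current = 474        # global energy use in 2008, exajoules per year
solar = 5_000_000    # yearly solar energy received by the earth, exajoules
growth = 1.02        # 2% growth per year

# 474 * 1.02**t = 5_000_000  =>  t = log(5_000_000 / 474) / log(1.02)
years = math.log(solar / current) / math.log(growth)
print(round(years))  # roughly 468 years -- indeed "less than 500"
```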

Think about that: if energy consumption growth continues at the current pace, then in 500 years we'll either be using ALL solar energy received by the earth (leaving none for the biosphere), or we'll have figured out some magic technology to produce 5 million exajoules of energy per year. Assuming the magic technology, where are we going to get rid of all that extra heat? It would effectively be like having a second sun on earth, cooking us in place.

Granted, you did say "do more with less". So let's say energy consumption stays constant in the future, and instead we derive 2% more "value" from the same energy each year. Now you run into a new problem: no matter how you define "value", you run into physical limits. If you define value as "amount of mass lifted out of the earth's gravity field", then the hard efficiency limit is a minimum of 60 megajoules per kg. If you define value as "amount of computation", then again there are limits given by the laws of physics.
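That 60 megajoules figure is just the earth's gravitational binding energy per kilogram, GM/r. A quick sanity check (the constants are standard physical values, not from the original post):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the earth, kg
r = 6.371e6     # mean radius of the earth, m

# minimum energy to lift 1 kg out of the earth's gravity well
energy_per_kg = G * M / r
print(energy_per_kg / 1e6)   # ~62.6 megajoules per kg
```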

Exponential growth is counterintuitive. No matter how far you push the limits (e.g. by colonizing the entire galaxy or inventing game-changing technology), exponential growth will hit its limits much faster than you think. We're talking about growth with a fixed doubling period here.

Finally, I'd argue that we are already experiencing the end of exponential growth today. After decades of growth, global oil production reached a plateau in 2004. It's not a coincidence that we experienced a major financial crash and recession soon after that. The era of "perpetual growth" is over. The next era will be that of the "zero-sum game" at best.

On a related note, after writing the above post I stumbled on this excellent series of videos of a lecture by Dr. Albert A. Bartlett where he shows with some very simple calculations and examples that "steady growth" is in fact always unsustainable. We have to find a way to get our civilization to work with 0% growth. The alternative is total collapse, probably within decades.

Saturday, April 3, 2010

Building a dependency injection container in 30 lines

After reading this article by Josh Smith explaining the Service Locator pattern, I commented that direct dependency injection might be a better idea. Mark Seemann has a good write-up about why Service Locator is an anti-pattern, and I'm inclined to agree with him.

When Josh replied that injecting dependencies with constructor arguments doesn't really solve the problem of dependency creation, I was tempted to reply by enumerating all the .NET dependency injection frameworks that exist for exactly this purpose.

But then I realized that Josh had demonstrated the Service Locator pattern without using any framework. Instead, his article has a ServiceContainer class of about 30 lines. Service Locator has many disadvantages, but apparently it can be quite lightweight!

This then led me to wonder if the same could be done for creating a dependency injection framework. Ayende has actually already demonstrated that you can create a primitive one in 15 lines, but I was thinking of something that could be used with a more friendly Ninject-esque syntax like this:

var container = new Container();
container.Bind<App, App>();
container.Bind<IFoo, Foo>();
container.Bind<IBar, Bar>();

var app = container.Pull<App>();

As it turns out, implementing a bare-bones container which can do that is really not that hard. It also has the advantage that it takes care of the dependencies of the dependencies, etcetera, something which Josh's sample doesn't seem to do. (Disclaimer: I didn't really test this for anything but the plain vanilla use case; no error conditions were considered.)

public class Container
{
    private readonly Dictionary<Type, Type> contractToClassMap = new Dictionary<Type, Type>();
    private readonly Dictionary<Type, object> contractToInstanceMap = new Dictionary<Type, object>();

    public void Bind<TContract, TClass>() where TClass : class, TContract
    {
        this.contractToClassMap[typeof(TContract)] = typeof(TClass);
    }

    public TContract Pull<TContract>()
    {
        return (TContract)Pull(typeof(TContract));
    }

    public object Pull(Type contract)
    {
        object instance;
        this.contractToInstanceMap.TryGetValue(contract, out instance);
        if (instance == null)
        {
            var constructor = contractToClassMap[contract].GetConstructors()[0];
            var args =
                from parameter in constructor.GetParameters()
                select Pull(parameter.ParameterType);
            instance = constructor.Invoke(args.ToArray());
            this.contractToInstanceMap[contract] = instance;
        }
        return instance;
    }
}
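As an aside, the same idea translates almost line for line to other languages. Here is a rough Python equivalent (my sketch, not from the original post) that resolves constructor dependencies from type annotations instead of .NET reflection:

```python
import inspect

class Container:
    """Bare-bones singleton DI container driven by type annotations."""
    def __init__(self):
        self._classes = {}    # contract -> implementation class
        self._instances = {}  # contract -> singleton instance

    def bind(self, contract, cls):
        self._classes[contract] = cls

    def pull(self, contract):
        if contract not in self._instances:
            cls = self._classes[contract]
            params = inspect.signature(cls.__init__).parameters
            # recursively resolve every annotated constructor parameter
            args = {name: self.pull(p.annotation)
                    for name, p in params.items()
                    if p.annotation is not inspect.Parameter.empty}
            self._instances[contract] = cls(**args)
        return self._instances[contract]

# usage mirroring the C# snippet above
class IFoo: ...
class Foo(IFoo): ...
class IBar: ...
class Bar(IBar):
    def __init__(self, foo: IFoo):
        self.foo = foo
class App:
    def __init__(self, foo: IFoo, bar: IBar):
        self.foo, self.bar = foo, bar

container = Container()
container.bind(App, App)
container.bind(IFoo, Foo)
container.bind(IBar, Bar)
app = container.pull(App)
```

Like the C# version, this caches one instance per contract, so shared dependencies are singletons.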

Thursday, April 1, 2010

Adding some compiler verification to PropertyChanged events

I've been playing around with Windows Presentation Foundation and the Model-View-ViewModel pattern in the past week. Josh Smith's introductory article does a good job explaining how the MVVM pattern can be used in WPF.

One aspect of the pattern is that your view model needs to provide change notifications, for example by implementing the INotifyPropertyChanged interface. To avoid repeating the same code, it might be a good idea to implement this in a shared base class:

public abstract class ViewModelBase : INotifyPropertyChanged
{
    protected virtual void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

There is a problem with this though: the caller of OnPropertyChanged can pass any string. To constrain this to strings that match a property name, we can use reflection to check whether such a property indeed exists. The sample in Josh Smith's introductory article takes this approach. That way, passing an invalid string will at least raise an exception at run-time.

However, we can still do one better and catch such errors at compile time, which is a huge advantage during refactorings. The solution involves two advanced C# tricks. The first trick makes use of the fact that the C# compiler can convert lambdas into expression trees, which can then be inspected to extract the name of a property:

Foo foo = new Foo();
string propertyName = GetPropertyNameFromExpression<Foo, int>(x => x.Bar);
Debug.Assert(propertyName == "Bar");

The GetPropertyNameFromExpression method is implemented like this:

private string GetPropertyNameFromExpression<TClass, TProperty>(
    Expression<Func<TClass, TProperty>> expression)
{
    var memberExpression = expression.Body as MemberExpression;
    if (memberExpression == null)
    {
        throw new ArgumentException(String.Format(
            "'{0}' is not a member expression", expression.Body));
    }
    return memberExpression.Member.Name;
}
But how do we get that TClass type parameter in our base class? That's our second trick: as it turns out, it is possible for a base class to have a type parameter representing its derived classes. The resulting base class looks like this:

public abstract class ViewModelBase<TDerived> : INotifyPropertyChanged
    where TDerived : ViewModelBase<TDerived>
{
    private string GetPropertyNameFromExpression<TPropertyType>(
        Expression<Func<TDerived, TPropertyType>> expression)
    {
        var memberExpression = expression.Body as MemberExpression;
        if (memberExpression == null)
        {
            throw new ArgumentException(String.Format(
                "'{0}' is not a member expression", expression.Body));
        }
        return memberExpression.Member.Name;
    }

    /// <summary>
    /// Triggers the <see cref="PropertyChanged"/> event for the property used in
    /// <paramref name="expression"/>
    /// </summary>
    /// <param name="expression">
    /// A simple member expression which uses the property to trigger the event for, e.g.
    /// <c>x => x.Foo</c> will raise the event for the property "Foo".
    /// </param>
    protected void OnPropertyChanged<TPropertyType>(
        Expression<Func<TDerived, TPropertyType>> expression)
    {
        string name = GetPropertyNameFromExpression(expression);
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(name));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

The viewmodel implementations then look like this:

public class FooViewModel : ViewModelBase<FooViewModel>
{
    private int bar;

    public int Bar
    {
        get { return bar; }
        set
        {
            bar = value;
            OnPropertyChanged(x => x.Bar);
        }
    }
}

It is now much harder to trigger a PropertyChanged event with the wrong property name, as the compiler will verify that the property actually exists. Better yet, if we do a "rename" refactoring of our properties then the IDE will also do the rename in the OnPropertyChanged lambdas. Hurray! :-)
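As an aside, the trick of deriving the name from a lambda isn't unique to C#. Here's a rough Python analog (a sketch of the idea only — Python has no expression trees or static typing, so unlike the C# version this still catches mistakes at run time rather than compile time):

```python
class _NameRecorder:
    """Proxy whose attribute access returns the attribute's own name."""
    def __getattr__(self, name):
        return name

def property_name(expression):
    # call the lambda with the proxy; x.bar evaluates to the string "bar"
    return expression(_NameRecorder())

print(property_name(lambda x: x.bar))   # prints "bar"
```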

Sunday, March 28, 2010

The General Public License doesn't always force you to make your code available

I just answered this question on stackoverflow asking for "ways around the GPL". While the motives of the poster seem questionable, this did prompt me to explain a subtlety of the General Public License which he may not have understood: the GPL does not automatically force you to release your code as soon as you use GPL'ed code.

The GPL only requires that you make the code available to anyone you distribute the software to. This is not exactly the same as releasing it to the whole world for free. From the GPL FAQ:

Does the GPL require that source code of modified versions be posted to the public?

The GPL does not require you to release your modified version, or any part of it. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it
internally without ever releasing it outside the organization.

But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL.

Thus, the GPL gives permission to release the modified program in certain ways, and not in other ways; but the decision of whether to release it is up to you.

So if you use GPL code for building an internal tool which you will never distribute to third parties, then there is no requirement to distribute the source to anyone.

Interestingly enough, the above subtlety also applies if you only make the software available as a web application hosted on your own web servers. Since you aren't technically distributing the application to the users, you don't have to give them the code. The Affero General Public License (AGPL) was designed specifically to close this loophole.

Monday, February 22, 2010

Creating a web feed for a file system directory

I just wrote a little python script which generates a web feed for a folder with text files. When run, the script detects the 10 last added/changed files and outputs them as entries in a feed file. This makes it possible to easily create a web feed with just some shell scripting.

To use the script:
  • download it here and make it executable
  • edit the configuration values at the start of the script
  • execute the script regularly, e.g. from a cron job
  • if not generating the file directly on a web server, publish the feed file on the web (e.g. with scp or just write it to your public dropbox folder)
I have used the W3C feed validation service to check that the resulting file is a valid atom syndication feed. However, the configuration values are important for the validation so check that the generated feed still validates after configuring.
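The core of such a script — finding the 10 most recently added or changed files — only takes a few lines. A minimal Python sketch (an illustration of the approach; the actual downloadable script may differ):

```python
from pathlib import Path

def latest_files(folder, count=10):
    """Return the most recently modified files in folder, newest first."""
    files = (p for p in Path(folder).iterdir() if p.is_file())
    return sorted(files, key=lambda p: p.stat().st_mtime, reverse=True)[:count]

# each entry can then be turned into an <entry> element of the Atom feed
for path in latest_files("."):
    print(path.name)
```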

Tuesday, February 16, 2010

Building a NAS, part 7: ZFS snapshots, scrubbing and error reporting

Setting up a ZFS pool with redundancy can only protect you against disk failures. To protect yourself against accidental deletions or modifications of files, you can use snapshots. You also need to explicitly start a ZFS data scrub at regular intervals to make sure that any checksum failures are repaired. Such things are best automated, but you might still want to receive reports so that you can keep an eye on things.

Automated snapshots

Setting up automated snapshots for ZFS-FUSE on Debian is surprisingly easy. Drop this script in /etc/cron.daily/:

zfs snapshot mypool/myfilesystem@`date +%Y.%m.%d:%H.%M`.auto

This will automatically create daily snapshots with a name like mypool/myfilesystem@2010.02.16:03.00.auto. Note that this will complicate things if you need to delete stuff to make room. As long as there is a snapshot referencing a file, it will continue to take up space in the pool. Daily snapshots work best for a grow-only archive where you rarely need to delete something.

A word of warning: the scripts in /etc/cron.daily are only executed if they are executable and have no dots in their name. See man run-parts for more details. Test with /etc/cron.hourly to verify that everything works, then move the script to /etc/cron.daily.

Automated scrubbing

A ZFS pool can repair its checksum errors (if there is redundant storage) while still remaining on-line. This is called a scrub. The recommended scrub interval for consumer grade disks is one week. Drop this script in /etc/cron.weekly:
zpool scrub mypool

Web feed reporting

A report of the scrub progress or the results of the last scrub can be shown with the zpool status command. A list of all file systems and snapshots (including some useful statistics) can be shown with the zfs list -t all command. To automate the reporting, I use this script in a cron job:

reportfile=/root/poolreports/`date +%Y.%m.%d:%H.%M`.txt
date > ${reportfile}
zpool status nas-pool >> ${reportfile} 2>&1
zfs list -t all >> ${reportfile} 2>&1

I then generate a web feed for the /root/poolreports/ folder as I explained in my previous post and follow the feed with Google Reader.

Thursday, February 11, 2010

Debugging: Why is the Tick event of my WinForms timer no longer raised

I was debugging an issue at work today where a WinForms Timer object was apparently no longer firing Tick events as it was supposed to.

It turned out that the timer was inadvertently being used by a BackgroundWorker thread. Like most classes, WinForms timers are not thread-safe, so any behavior guarantees are out the window as soon as you start accessing them from different threads without synchronization measures.

Worse, winforms timers interact with the main application thread directly so in this case it is not possible to put such synchronization measures in place. I like to call such classes thread-hostile. Another sure way to create thread-hostile code is to use global variables; we have our fair share of such problems in our legacy code base.

The following sample reproduces the timer problem by accessing a timer from a ThreadPool worker thread; the Tick event will only be raised once instead of indefinitely as you might expect:

public partial class Form1 : Form
{
    private System.Windows.Forms.Timer fTimer;

    public Form1()
    {
        InitializeComponent();
        fTimer = new System.Windows.Forms.Timer();
        fTimer.Interval = 1000;
        fTimer.Tick += HandleTimerTick;
        fTimer.Start();
    }

    private void HandleTimerTick(object sender, EventArgs args)
    {
        // sabotage timer by stopping/starting it from another thread
        ThreadPool.QueueUserWorkItem(state =>
        {
            fTimer.Stop();
            fTimer.Start();
        });

        MessageBox.Show("Timer tick");
    }
}

In our case, the worker thread touched the timer in a much more indirect way: the background task was using a service which leaked side effects into the rest of the system via events, resulting in inadvertent multi-threaded access all over the place.

Conclusion: if you are going to do multi-threading, make sure threads are well-isolated and only communicate with the rest of the system via well defined synchronization points.

Wishlist item: wouldn't it be nice if you had to explicitly mark methods before they could be used by multiple threads? The C# compiler could then generate optional checks that make your code fail fast when there is an accidental "threading leak". It wouldn't surprise me if the language will actually grow such a debugging feature in the future; multi-threaded .NET programming is on the rise yet still wildly dangerous.

Thursday, February 4, 2010

OpenID: great standard, many poor implementations

I was browsing slashdot the other day, and noticed an interesting story that I wanted to upvote, which requires logging in. Interestingly, there's an openid option:

OpenID is a standard that allows you to reuse a single identity on different websites (or any other service that requires an identity). Chances are you already have an OpenID. For example, if you have a Google account, then you can use the corresponding URL as an OpenID. There are many more OpenID providers like Yahoo, MyOpenID, AOL, LiveJournal, Wordpress, Blogger, VeriSign, etcetera.

Currently I have 130 user accounts on the web that I have bothered to keep track of. The idea of OpenID is that you no longer have to create hundreds of accounts, each with their own user name and password (or worse, the same password). You just enter your OpenID, and the OpenID provider takes care of authenticating you.

Stackoverflow gets it right

For an example of OpenID done right, try the stackoverflow login page. See how easy that was? No passwords, no confirmation mails, just reuse your existing identity by clicking the icon of your identity provider. As Steve Jobs would say, isn't that wonderful?

Slashdot gets it wrong

Unfortunately, when you log in with your OpenID in slashdot you are greeted by this:

In other words, you still have to create a username and password specifically for Slashdot. Worse, even if you do that, you still cannot log in with just your OpenID. What gives?

Facebook gets it wrong

You can go into your facebook Settings - Account Settings - Linked Accounts - Change - Add Account and enter an OpenID there. If you then log out and try to log back in to test it, there is no OpenID option on the login page. WTF? On a hunch, I then just retyped the facebook URL in my browser address bar and it looked like I was already logged in.

A little more investigation shows that facebook relies on a cookie that links your browser to your OpenID, and tries to log you in transparently with that information. Since I have configured my browser to only keep cookies between browser sessions for a small white-list of websites, this doesn't work for me at all. Even if I add facebook to the white-list, I won't be able to use my OpenID to log in on other computers. FAIL. I guess just putting a "Log in with OpenID" button on the login page would have been too easy.

Dealing with lack of OpenID support

OpenID support is growing, but the majority of web sites still don't support it or implement their support very poorly. Others only support OpenID as an identity provider and refuse to accept identities from other providers.

To deal with all these sites that still require passwords, most people reuse the same password over and over again. This is terrible security. Any of the sites that you use could have a malicious admin who might sell username/password combos to the highest bidder. Or maybe the website admin isn't malicious, but the user account database might store passwords unhashed and could be compromised.

Personally I use the cross-platform KeePass application to maintain a personal encrypted database of passwords. The database is protected by a single master password (or passphrase). I put mine in my dropbox folder, so I have access to my passwords on each PC I use. Even better, if you stick to version 1.x the database is compatible with KeePassMobile so you can carry your passwords with you on your phone.

Tuesday, February 2, 2010

MEF: GetExport, GetExportedValue methods compared to Import attribute

(Also posted on the MEF forum)

While writing some automated composition tests today, I found out the hard way that GetExportedValue<Lazy<T>> doesn't do what you might think it does at first sight. For example, the following throws because it tries to get an exported value for the type Lazy<IFoo>, which is not an available part:
public class Program
{
    public static void Main(string[] args)
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);

        // throws: Lazy<IFoo> itself is not an available part
        var exports = container.GetExportedValue<Lazy<IFoo>>();
    }
}

public interface IFoo
{
}

[Export(typeof(IFoo))]
public class Foo : IFoo
{
}
As it turns out, if you want to pull a lazy export from a container, you just have to call GetExport<T> or GetExport<T,TMetaData>. This is quite obvious with hindsight, but I just got so used to using the shorthand GetExportedValue<T> that I completely forgot about the existence of GetExport<T>.

This then led me to wonder why the GetExport and GetExportedValue methods work differently from the Import attribute. With the Import attribute, MEF inspects the type you are trying to import and gives special treatment to Lazy<T>. Shouldn't there be an ImportLazy (and ImportManyLazy) attribute instead to make this intention explicit?

Sunday, January 31, 2010

On the emergence of ubiquitous computing

I just read this post about how the iPhone and new iPad herald the era of "new world computing":
In the New World, computers are task-centric. We are reading email, browsing the web, playing a game, but not all at once. Applications are sandboxed, then moats dug around the sandboxes, and then barbed wire placed around the moats. As a direct result, New World computers do not need virus scanners, their batteries last longer, and they rarely crash, but their users have lost a degree of freedom. New World computers have unprecedented ease of use, and benefit from decades of research into human-computer interaction. They are immediately understandable, fast, stable, and laser-focused on the 80% of the famous 80/20 rule.

It is an interesting post, but I don't believe that the lack of multi-tasking and other freedoms is a necessity for "new world computing". If you can make a slick UI for switching between tasks, then you can also make a slick UI for switching between tasks that continue to run in the background.

These limitations are just engineering trade-offs that had to be made to give us an early peek at ubiquitous computing (which is by the way the real term for "new world computing" and was already a research topic long before I went to college). I seriously doubt the next generation of these devices will have the same limitations.

You can wave your hands and talk about how it's all task oriented now all you want, but in the end multi-tasking is a necessity even if only for running a chat client 24/7. And that's exactly what I do with my old and clunky N95 smartphone.

Setting up dropbox on a headless linux system

I have been using dropbox for a while to synchronize files between different computers. It has some pretty impressive bullet points:

  • Seamless syncing. You just put files in the dropbox folder, and they are automatically synchronized to your other computers. No firewall issues. In fact, the only problem I have is at work where dropbox is explicitly blocked. >:-(
  • Easy file sharing over the internet. Just put a file in your public folder, right click, copy public link. You can even host a website on dropbox this way.
  • Cross-platform. It works on linux, windows, OS X and even iPhone.
  • You can access the revision history of your files so it works pretty well as an on-line backup service, even if you delete files by accident.
  • It's completely free if you don't need more than 2GB storage and 30 days of revision history.

I have set up dropbox on my NAS so that I can synchronize my dropbox to a ZFS file system. This way I can combine the advantages of dropbox with the advantages of my NAS:

  • I get to keep snapshots indefinitely, with disk space being my only limitation.
  • I protect my data even if the dropbox service fails disastrously, e.g. because of security breach. Think file deletions being synced to all your computers.
  • I can free space on my dropbox account by moving files on the NAS out of the dropbox folder, yet still keep them safe through my NAS snapshot+backup policy.

Dropbox is targeted at GUI environments, but it can still be installed on a headless Linux system as described on this wiki page. However, the wiki page did not describe how to change the location of the Dropbox folder, which I needed in order to point Dropbox at a folder on my NAS storage pool. It took some minor reverse engineering of the Dropbox settings file, but I successfully created a script to do exactly that. I've also added a link and instructions on the wiki on how to use it.

Wednesday, January 27, 2010

Using MEF for classes which take configuration values

We're using the Managed Extensibility Framework (part of the upcoming .NET 4.0) at work for a new project.

I have a few pre-MEF classes which take strings, integers and other primitive data types in their constructor. For example, consider the following C# class which tracks recently used resources (e.g. the last files opened by the application) by saving them in a file:

[Export(typeof(IRecentlyUsedTracker))]
public class RecentlyUsedTracker : IRecentlyUsedTracker
{
   private readonly string file;
   private readonly int maxItems;

   // constructor
   public RecentlyUsedTracker(string recentlyUsedFile, int maxItems)
   {
      this.file = recentlyUsedFile;
      this.maxItems = maxItems;
   }

   // Marks the given resource as recently used.
   public void Touch(string resource)
   {
      // implementation omitted
   }
}

The above export doesn't work, because MEF cannot instantiate this class. Obviously it cannot know which string and integer to use as arguments for the constructor.

You can still make MEF instantiate it by adding [Import("somename")] attributes to the constructor parameters like this:

    // constructor
    public RecentlyUsedTracker(
       [Import("RecentlyUsedTracker.File")] string recentlyUsedFile,
       [Import("RecentlyUsedTracker.MaxItems")] int maxItems)
However, that makes it much more complex to set up the container. Each configuration value has to be explicitly added to the MEF container with ComposeExportedValue as shown below. Blergh! (correction: see update below!)
var catalog = ... some catalog ...
var container = new CompositionContainer(catalog);
container.ComposeExportedValue<string>("RecentlyUsedTracker.File", @"c:\recentlyused.txt");
container.ComposeExportedValue<int>("RecentlyUsedTracker.MaxItems", 5);

My next idea was then to do something like this for the constructor:

   public RecentlyUsedTracker([Import] IConfigurationProvider configurationProvider)
   {
      this.file = configurationProvider.GetValue<string>("RecentlyUsedTracker.File");
      this.maxItems = configurationProvider.GetValue<int>("RecentlyUsedTracker.MaxItems");
   }

I'm a bit worried about the fact that I'm importing the configurationProvider object only to use it briefly in the constructor. It's also annoying that I need to mock this service in my unit tests, instead of just passing a value. I've asked on the MEF forum if there is a better way.

Update: turns out there is a better way. My first attempt (adding attributes to constructor arguments) is just fine. It's just explicitly adding configuration values to the container that was the bad idea. As Glenn Block suggested, you can just export the configuration values via properties like this:

public class RecentlyUsedTrackerConfiguration
{
  public RecentlyUsedTrackerConfiguration()
  {
     // set values here
  }

  [Export("RecentlyUsedTracker.File")]
  public string File { get; set; }

  [Export("RecentlyUsedTracker.MaxItems")]
  public int MaxItems { get; set; }
}

Tuesday, January 26, 2010

Building a NAS, part 6: testing ZFS checksumming

Let's take a look at how ZFS protects data. I plugged in a spare external disk, created two small 1GB partitions on it with fdisk, and set up a ZFS pool for testing:

fdisk /dev/sdc # set up two 1GB partitions
zpool create testpool mirror /dev/sdc1 /dev/sdc2
zfs create testpool/testfs
Note that this is just a test set-up. Normally you should definitely use two separate disks to get the full benefit of mirroring; two mirrored partitions on the same disk won't survive a disk failure. Also, it doesn't really make sense to slice up disks into partitions like this in a real deployment.

Smashing bits

Let's create a test file which fills the file system and make a note of the sha1 fingerprint:
cd /testpool/testfs
dd if=/dev/urandom of=testfile bs=1M count=920
# prints a sha1 fingerprint for the file
sha1sum /testpool/testfs/testfile
Now comes the fun part. With a small (and very dangerous) python script, we can corrupt one of the devices by writing some junk data at regular intervals:
openedDevice = open('/dev/sdc1', 'w+b')
interval = 10000000
while True:
  # overwrite a few bytes, then skip ahead about 10MB and repeat
  openedDevice.write('junk')
  openedDevice.seek(interval, 1)
  print str(openedDevice.tell())
When we reread the file after the corruption, ZFS will transparently pick the good copies of the data from the healthy device. Note that in this case the file cannot be cached in memory because it is larger than the available system memory.
# still prints the correct fingerprint!
sha1sum /testpool/testfs/testfile
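The same fingerprint check can also be done from python; a small sketch using only the standard library:

```python
import hashlib

def sha1_of_file(path, chunk_size=1024 * 1024):
    """Compute the SHA-1 hex digest of a file, reading it in 1MB chunks
    so that files larger than memory pose no problem."""
    digest = hashlib.sha1()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()
```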
Strangely enough, running zpool status testpool doesn't report any errors at this point. I have sent a mail to the zfs-fuse mailing list to ask whether this is normal.

To detect and fix the errors, we have to run this simple command:

zpool scrub testpool
# shows progress and results of the scrub
zpool status testpool
To protect against bit rot on consumer grade disks, the recommendation is to run a scrub once a week. In a future post I'll explore how to do that automatically, including some kind of reporting so that I know when a disk is in trouble.
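As a preview, the automation could look something like this python sketch run from cron. The health check here is a deliberately crude string match on `zpool status` output and would need hardening for real monitoring:

```python
import subprocess

def pool_is_healthy(status_output):
    """Crude health check on the text printed by `zpool status`."""
    return ('errors: No known data errors' in status_output
            and 'DEGRADED' not in status_output
            and 'FAULTED' not in status_output)

def scrub_and_check(pool):
    """Kick off a scrub (the command returns immediately) and report
    whether the pool currently looks healthy."""
    subprocess.check_call(['zpool', 'scrub', pool])
    status = subprocess.check_output(['zpool', 'status', pool]).decode()
    return pool_is_healthy(status)
```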

Sunday, January 24, 2010

Building a NAS, part 5: minimizing power consumption

My plan is to let my NAS run 24/7 if the impact on my electricity bill is acceptable. To measure power consumption, I have purchased a power consumption meter that you can plug in between a wall socket and some device. It is one of those tools that can provide hours of quality geek entertainment. So many devices to measure around the house, so little time! :-)

The following table shows the passive power consumption of my NAS after each round of power saving measures. By "passive" I mean that the NAS was not doing anything useful like reading from or writing to the ZFS file systems.

Configuration                          power consumption (watts)
under-clocked CPU                      69
removed AGP video card                 59
removed PATA CD-ROM drive              57
removed unused eSATA RAID1 PCI card    55
2 storage disks in standby             47

Take it slow

The Athlon CPU and/or the motherboard in this box are apparently too old to support dynamic cpu frequency scaling. They also don't seem to support the AMD power saving mode which you can control with the athcool package. Instead, I had to get down and dirty with the BIOS settings and set the CPU multiplier to the minimum value. According to cat /proc/cpuinfo, this slowed down the CPU from 2GHz to 1GHz. Profit: -11 watts.

Bare necessities

I had to connect a video card and a CD-ROM drive to install debian Lenny. Now that the server is running and connected to the network, I can throw those out again and manage the system remotely. Fortunately the BIOS supports booting without a video card. I also removed a PCI card for eSATA RAID1 which I initially thought I would need. Profit: -14 watts

Spin down those storage disks

With the hdparm command we can inspect the power state of a disk and configure stand-by mode:

aptitude install hdparm
hdparm -C /dev/sda # prints "active/idle"
# now tell disk to go to stand-by whenever not used for 2min
# note the very strange S-value to time mapping; consult man hdparm!

hdparm -S 25 /dev/sda
sleep 120
hdparm -C /dev/sda # should print "stand-by"
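The strange -S mapping (per man hdparm: values 1-240 count in units of 5 seconds, values 241-251 in units of 30 minutes) can be captured in a small python helper:

```python
def hdparm_standby_value(seconds):
    """Translate a desired spindown timeout in seconds into a value
    for hdparm -S, following the mapping described in man hdparm."""
    if seconds <= 0:
        raise ValueError("timeout must be positive")
    if seconds <= 240 * 5:
        # values 1..240: units of 5 seconds (round up to the next step)
        return max(1, (seconds + 4) // 5)
    # values 241..251: units of 30 minutes, capped at 5.5 hours
    half_hours = min(11, max(1, seconds // (30 * 60)))
    return 240 + half_hours
```

The -S 25 used above corresponds to 125 seconds, which is the roughly 2 minute timeout we want.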
To make the power saving configuration permanent, I added this to /etc/hdparm.conf. Note the use of /dev/disk/by-id to keep the settings correct even if we start changing the NAS hardware:
/dev/disk/by-id/scsi-SATA_ST3500418AS_9VM7RWGV {
    spindown_time = 25
}

/dev/disk/by-id/scsi-SATA_ST3500418AS_9VM7SHA5 {
    spindown_time = 25
}
Profit: -8 watts

Leave the system disk alone

It is much harder to get power savings for the system disk, because it is used all the time for logging and by a bunch of daemons. Trying to put this disk in stand-by will just cause it to frequently spin down and up again.

Instead, I'm using a disk recovered from a dead laptop as the system drive. This requires a cheap 2.5" to 3.5" IDE converter cable. Such a drive is already extremely efficient; the potential savings of putting it in stand-by are negligible (~1 watt).

Cost before and after

( update: corrected kwh cost, my original estimate was about 50% of actual cost because I based it solely on Electrabel's power generation cost, and forgot the distribution costs on my bill - thank you Peter!)

At 80 watts, energy consumption per year was 80 watts * (24 * 365) hours = 700.8 kwh. At an average of 0.16 euro/kwh, that costs 112 euros. I have reduced that to 66 euros. This goes to show that a 20 euro energy consumption meter can yield a return on investment rather fast.

Another lesson I'll be remembering from these calculations: for my current contract, I can estimate the yearly cost for the continuous consumption of a device as 1.4 euro per watt.
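That rule of thumb is easy to verify with a couple of lines of python:

```python
def yearly_cost_euro(watts, euro_per_kwh=0.16):
    """Yearly electricity cost of a device drawing `watts` continuously."""
    kwh_per_year = watts * 24 * 365 / 1000.0
    return kwh_per_year * euro_per_kwh
```

At 0.16 euro/kwh this indeed comes out at roughly 1.4 euro per watt of continuous consumption per year.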

Saturday, January 23, 2010

Building a NAS, part 4: ZFS on linux with FUSE

I decided to use the ZFS file system for my NAS. Although licensing issues prevent it from being ported to the linux kernel, there is a ZFS-FUSE project which has ported ZFS to run in userspace via FUSE.

ZFS is a mature file system (and tool set) which does device pooling, redundant storage, checksumming, snapshots and copy-on-write clones. It also has a very cool deduplication feature where you can configure the file system to look for identical chunks of data and store those only once. Nice!

Getting ZFS-FUSE on Debian Lenny

We'll install some tools, compile and manually start the zfs-fuse daemon. Note that I use the latest source from the "official" repository here, not the last stable release.

aptitude install git-core libaio-dev libattr1-dev libacl1-dev libz-dev libfuse-dev libfuse2 scons libssl-dev
git clone zfs-official
cd zfs-official/src
scons install

At the time of writing, the "scons install" command doesn't seem to install the debian init script. Also, the debian init script which is part of the source has a small error. We'll take care of that manually:
cd ../debian
nano zfs-fuse.init
# fix the line "DAEMON=/usr/sbin/zfs-fuse"
# it should be "DAEMON=/usr/local/sbin/zfs-fuse"

cp zfs-fuse.default /etc/default/zfs-fuse
cp zfs-fuse.init /etc/init.d/zfs-fuse
chmod +x /etc/init.d/zfs-fuse
aptitude install sysv-rc-conf
# use arrows to scroll down to zfs-fuse
# use arrows and space to enable run levels 2,3,4,5
# use q to quit

Setting up a ZFS storage pool and file systems

Currently I have two new 500GB disks available for storage. My first plan was to split each disk in two partitions to build a "safe" storage pool (mirrored over two partitions) and a "bulk" storage pool (no redundancy, striped over two partitions). However, a recurring theme in the ZFS Best Practices Guide is that you should not slice up your disks if you can avoid it. Therefore, I'll keep things simple and just create one big 500GB pool of mirrored storage.

# start the zfs daemon
/etc/init.d/zfs-fuse start
zpool create nas-pool mirror \
  /dev/disk/by-id/scsi-SATA_ST3500418AS_9VM7RWGV \
  /dev/disk/by-id/scsi-SATA_ST3500418AS_9VM7SHA5
zpool status
I will, however, still create two separate file systems in this pool for "archive" and "bulk" storage. This makes it easy to have different backup policies for each data set.
zfs create nas-pool/archive
zfs create nas-pool/bulk
zfs list
zfs mount -a
Because ZFS is designed to handle storage pools with potentially thousands or more file systems, you don't have to manually edit /etc/fstab to set up mount points. The shown mount command will automatically mount all available ZFS file systems as /pool-name/file-system-name. This is also what the init script does.

Exposing the ZFS file systems on the network via samba

First we'll set up a "nasusers" group which has read/write access to the ZFS file system:

# create nasusers group and add a user to it
groupadd nasusers
usermod -a -G nasusers wim

# give nasusers read/write access
cd /nas-pool
chmod 2770 archive
chmod 2770 bulk
chgrp nasusers archive
chgrp nasusers bulk
Now give those users a samba password:
smbpasswd -a wim
Add a section like this to /etc/samba/smb.conf for each folder to expose:
[archive]
   path = /nas-pool/archive
   valid users = @nasusers
   writable = yes
Finally, restart samba:
/etc/init.d/samba restart
Now the ZFS file systems should be available on the network, and users can start copying their stuff in there. In a future post we'll explore how to leverage some of those advanced ZFS features.

Friday, January 22, 2010

Building a NAS, part 3: filesystem doubts

I had planned to demonstrate the creation and administration of a raid1 BTRFS filesystem in this post, but while playing around with BTRFS I ran into a few snags:

  • I was able to mount a normal BTRFS filesystem spanning two devices, but not one in raid1 mode. Then I discovered that I could only mount a raid1 BTRFS filesystem if I gave it a label.
  • I saw some unexplained mount failures on a multi-device FS which disappeared after I mounted once via another device.
  • Though data is checksummed, I couldn't find a way to detect checksum failures other than reading all files and watching output from the kernel with dmesg.
  • I confirmed that in a raid1 setup, BTRFS will still find the good copy after the data on one device is corrupted. However, I couldn't find a way to reliably repair the corruption other than reading and rewriting all files.
These issues didn't exactly give me confidence in the maturity of BTRFS. I knew it wasn't production ready, but I hoped it was close. I'm now pretty sure that it is not.

Perhaps more importantly, I also realized that BTRFS is the GPL-licensed answer to the more mature ZFS. ZFS is a Solaris filesystem developed by Sun. It can't be ported to run in the linux kernel because of licensing issues, hence the need for BTRFS. However, BTRFS is mainly sponsored by Oracle, and Oracle is buying Sun. If Oracle's motivation to sponsor BTRFS was to counter Sun's open source efforts, then the Sun deal takes away that motivation. On the other hand, if their motivation was truly to get a next-generation filesystem in linux, then they might as well relicense ZFS under the GPL. Chris Mason gave this vague comment when asked about ZFS and the Sun deal:

Chris: Sun has many interesting projects, and I’m looking forward to working with their R&D teams. We’re committed to continuing Btrfs development, and ZFS doesn’t change our long term plans in that area.
That's not a rational explanation of why Oracle would continue to support both projects, so I'm skeptical.

Meanwhile, that leaves my little build-a-NAS project stalled. I see these options:

  • use BTRFS anyway, even though it's not yet mature and its future seems unsure
  • run OpenSolaris and ZFS
  • run the FUSE port of ZFS on linux, which dodges the licensing issue by running the FS in userspace (presumably at the cost of performance)
  • use software raid + LVM on linux
I'm not sure at all which direction to take.

Sunday, January 17, 2010

Building a NAS, part 2: getting BTRFS on Lenny

update: after discovering that BTRFS isn't as mature as I hoped, I switched to ZFS-FUSE. You might want to read my post on setting up ZFS instead.

The debian "lenny" release comes with version 2.6.26-2 of the linux kernel. This kernel does not yet have support for BTRFS, so we'll download, compile and install the latest stable kernel release.
# install some required packages as root
aptitude install bzip2 fakeroot kernel-package libncurses5-dev zlib1g-dev

# download and extract linux kernel
tar -xvjf linux-
cd linux-

# copy existing kernel configuration from /boot
cp /boot/config-2.6.26-2-686 .config

# edit kernel configuration (navigate with arrows, toggle options with space)
# - under "File Systems", enable "Btrfs filesystem (EXPERIMENTAL)"
# - under "Virtualization", disable "Linux hypervisor example code"
# You can also take this opportunity to optimize the kernel for your CPU
# architecture under "Processor type and features" - "Processor family"
# Examine the output of "cat /proc/cpuinfo" if you're not sure of your CPU.
make menuconfig

# build kernel (this takes a while, especially on old machines)
make-kpkg --rootcmd fakeroot --initrd linux-image linux-headers

# install new kernel packages and reboot
cd ..
dpkg -i linux-image-
dpkg -i linux-headers-

If everything went well, the system should boot up under the new kernel. If something goes wrong, you still have the option of booting under the old kernel by using the grub menu at startup.

Now we have a kernel with support for the btrfs filesystem, but still no userspace tools to use it. We'll download, compile and install the latest version of those tools:

aptitude install git-core uuid-dev e2fslibs-dev libacl1-dev
git clone git://
cd btrfs-progs-unstable
make install

Now we have mkfs.btrfs to create a BTRFS file system, and some other tools to manage such a filesystem. We'll start playing around with those in the next post.

Building a NAS, part 1: installing Lenny and introducing BTRFS

I'm building a NAS from spare parts. The basic system is an 8 year old PC. My operating system of choice is the latest stable release of debian gnu/linux, codename lenny. I would like to use the (still experimental) btrfs file system on this box for the following reasons which make it a good data-haven:

  • It can do raid1-like mirroring of data over multiple devices. This makes it resilient against failed disks.
  • It can make copy-on-write mountable snapshots of a volume. By regularly making snapshots (e.g. from a cron-job) you can keep old versions of your data without wasting any space on identical copies.

Of course, this still won't fully protect my data. There could be hardware or software errors that destroy the data on all disks. There could be circumstances that destroy all disks together, like fire or lightning. So I'll still have to make an off-site backup every now and then.

To install lenny, I downloaded the 40MB businesscard CD image. Despite its small download size, this CD still has a user-friendly graphical installer. After letting the installer do its thing, I still did the following:

  • ran "tasksel" to install "file server" related packages
  • ran "aptitude install openssh-server" to enable remote access

At this point I have a basic linux system that I can SSH into. Unfortunately "lenny" does not come with btrfs support. We'll fix that in the next post...

Monday, January 11, 2010

Registration duties when buying a home in Flanders

This post is about law and taxes in the Flemish Region of Belgium; the quoted legislation is translated from Dutch.

This post belongs in a series about the purchase of our house. Earlier in this series:
Terminating a rental agreement as a tenant in Belgium

When you buy real estate in the Flemish Region, you have to pay registration duties on it. These duties are laid down in the Wetboek der registratie-, hypotheek- en griffierechten (Code of registration, mortgage and court fees). Article 44 states that the registration duty amounts to 10 percent:

The duty amounts to 10 percent for the sale, the exchange and every agreement transferring, for valuable consideration, the ownership or usufruct of immovable property.

Reduction to 5 percent

According to article 53, this is reduced to 5% for small rural properties and modest dwellings. That covers real estate whose cadastral income is below a certain maximum, laid down in a separate Royal Decree. (Unfortunately I can't find a link to the decree.)

This is also known as "klein beschrijf". There are further criteria laid down by articles 54-61; very dry reading, and since the house that Elke and I are going to buy doesn't qualify anyway, I won't go into them here.

Reduction of the amount on which the 5 or 10 percent is calculated

The above percentages are calculated on the price agreed between buyer and seller. According to article 46bis, however, part of this amount can be exempted, and you may deduct those exemptions from the price before calculating the 5 or 10 percent.

15,000 euros is exempted if you buy the property as your main residence:

The taxable base with respect to sales, as defined in articles 45 and 46, is reduced by 15,000 euros in the case of the outright purchase of the entirety in full ownership of an immovable property used or intended for habitation, by one or more natural persons, in order to establish their main residence there.

This exemption comes with extra conditions:

  • you may not already own other real estate
  • you must explicitly request the reduction (!)
  • you must move in within 2 years (5 years in the case of building land)

If you qualify for the above reduction, another 10,000 euros is exempted if you take out a loan for the house (or even 20,000 euros if you qualify for the reduction to 5 percent):

If, with a view to financing a purchase as referred to in the first paragraph, a mortgage is established on the purchased immovable property, the amount of the reduction of the taxable base referred to in the first paragraph is increased by either 10,000 euros if the duty referred to in article 44 is due on the purchase, or by 20,000 euros if the duty referred to in article 53 is due on the purchase.

In our case we qualify for both reductions, so we may deduct 25,000 euros from the sale price before calculating the 10 percent registration duty. That "saves" us 2,500 euros.
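The calculation can be summarized in a small python function. This is a simplified sketch of the rules above: the extra conditions (already owning property, moving in on time, and so on) are not modelled.

```python
def registration_duty(price, reduced_rate=False, main_residence=True,
                      with_mortgage=True):
    """Flemish registration duty: 10% (or 5% under 'klein beschrijf'),
    calculated on the price minus the article 46bis exemptions."""
    rate = 0.05 if reduced_rate else 0.10
    exemption = 0
    if main_residence:
        exemption += 15000  # main residence exemption
        if with_mortgage:
            # extra exemption when taking out a mortgage
            exemption += 20000 if reduced_rate else 10000
    return (price - exemption) * rate
```

For example, for a 200,000 euro house at the normal rate, the two exemptions together shave 2,500 euros off the duty.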

Sunday, January 10, 2010

Terminating a rental agreement as a tenant in Belgium

This post is about Belgian rental law as it applies in the Flemish region. As far as I know the referenced Belgian law is not published in English; the quotations below are translated from Dutch.

Since Elke and I have just signed a sales agreement for a house, we have a lot to do in the coming months. One of those things is giving notice on our rental agreement.

Our rental agreement is a typical 9-year contract. Such a contract is popularly known as "a 3-6-9" because the rental law contains provisions based on three-year periods. The rules for ending such an agreement early are laid down by Belgian legislation in the Civil Code, under "Rules concerning rental agreements relating to the tenant's main residence in particular", and the accompanying Royal Decree. A longer but more readable interpretation of these legal texts can be found in the brochure "De Huurwet" (10th edition, July 2008) published by the Flemish government.


A 9-year contract can always be terminated by the tenant, but notice must be given at least 3 months in advance:

Art. 3, § 5. The tenant may terminate the rental agreement at any time, subject to a notice period of three months.

That period starts on the first day of the month following the notice:

Art. 3, § 9. In all cases where notice may be given at any time, the notice period starts on the first day of the month following the month during which notice is given.

Strangely enough, the Royal Decree says something different in the annex "RENTAL AGREEMENTS FOR DWELLINGS LOCATED IN THE FLEMISH REGION". It looks like the words "the month following" have disappeared (emphasis added by me):

In all cases where notice may be given at any time, the notice period starts on the first day of the month during which notice is given.


If the tenant terminates the rental contract during the first 3 years, the landlord is entitled to a termination compensation. Also from Art. 3, § 5:

If, however, the tenant terminates the rental agreement during the first three-year period, the landlord is entitled to a compensation. That compensation equals three months', two months' or one month's rent, depending on whether the rental agreement ends during the first, the second or the third year.
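The compensation rule sketches easily in python (a simplification that counts whole months elapsed since the start of the lease):

```python
def termination_compensation(months_elapsed, monthly_rent):
    """Compensation owed to the landlord when the tenant terminates a
    9-year lease (Art. 3, par. 5): 3, 2 or 1 months of rent when the
    lease ends in year 1, 2 or 3, and nothing after the third year."""
    if months_elapsed >= 36:
        return 0
    year = months_elapsed // 12 + 1  # which year of the lease we're in
    return (4 - year) * monthly_rent
```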