Hard Directory Links in macOS

My friend and colleague Dennis Bell hit me up the other day with an odd problem:

hey, I have a problem

apparently perl allows you to create hard links to directories on OSX, but I have no idea how to get rid of them...

Here's the stat output he sent me:

$ stat -f "%d/%i %Sp %N" copy ../src/orig
16777220/30433678 drwxr-xr-x copy
16777220/30433678 drwxr-xr-x ../src/orig

Sure enough: two directories, on the same device (16777220), with the exact same inode (30433678).
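
The same test Dennis ran with stat(1) can be done from C with stat(2): two paths are hard links to the same filesystem object exactly when their st_dev and st_ino fields match. Here's a minimal sketch (same_object is my own name, not a standard call):

```c
#include <stdio.h>
#include <sys/stat.h>

/* Return 1 if a and b are hard links to the same object
   (same device and inode), 0 if not, -1 on error. */
int same_object(const char *a, const char *b)
{
    struct stat sa, sb;

    if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
        return -1;

    return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}
```

This is the same (dev, inode) pairing that stat's %d/%i format string printed above.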

This is a huge no-no in filesystem land, if only because forbidding hard links to directories eliminates an entire class of unsolvable problems with disk traversal. The tl;dr is that without hard directory links, you can't get cycles in the directed graph of parent -> child relationships. Cycles make graph traversal dangerous: a naive walker can loop forever.

(Note: symbolic links sidestep this issue by letting the traversal logic distinguish the real directory from the link; the links can then be skipped.)

In fact, it's so taboo that the Linux kernel flat out refuses to let you hard link one directory to another. This is what fascinated me so much about my friend's problem - how in blazes was this even possible?

A Detour to Linux-Land

Since I have spent a fair amount of time poking about in the Linux kernel source tree, and because I have years of experience writing system code to run on top of Linux, I figured I'd start there.

(This is based on commit 7eb97ba from Linus' tree)

→  grep -rn SYSCALL_DEFINE * | grep '\blinkat'
fs/namei.c:4239:SYSCALL_DEFINE5(linkat, int, olddfd, const char __user *, oldname,

So linkat(2) is implemented in fs/namei.c, but since it is a long function (by blog standards), I'll not repost the whole thing here. You are more than welcome to read the full listing here.

At some point during execution, once it has figured out just what you're trying to link up, linkat(2) calls vfs_link(...), which checks to see if the target of the link is a directory (line 4200):

if (S_ISDIR(inode->i_mode))
    return -EPERM;

Linux, unequivocally, forbids the creation of hard links to directories. It's hard-coded into the kernel. It's not configurable. It's not filesystem-dependent. It's the way things are.

A quick git blame on the namei.c source file turned up a long history:

7e79eedb3b2 (Tetsuo Handa   2008-06-24 16:50:15 +0200 4199) if (S_ISDIR(inode->i_mode))
^1da177e4c3 (Linus Torvalds 2005-04-16 15:20:36 -0700 4200)     return -EPERM;

7e79eedb was a variable-reuse patch to clean up the code slightly.

1da177e4 is the initial import of Linux kernel 2.6.12-rc2 into Git. Since I've already invested a sizable amount of time in this particular science expedition, I started looking at old tarball dists of Linux from kernel.org.

In Linux 2.5.5, Al Viro migrated the Big Kernel Lock from vfs_link() to the filesystem-specific i_op->link() handler. At the same time, he hoisted the S_ISDIR() check up into vfs_link(), effectively deciding for all filesystems that links to directories are verboten. Here's the Changelog entry:

<viro@math.psu.edu> (02/02/14 1.345)
  [PATCH] (3/5) more BKL shifting

  BKL shifted into ->link(), check for S_ISDIR moved into vfs_link().

Back to Apple-Land

Having verified my assumptions regarding directory hard links in Linux, it was time to try to reproduce the "issue" on macOS. Dennis was using Perl when he ran into this, but since this is firmly in kernel-system-call territory, I'm going to use C. Here's a small program I wrote that (thinly) wraps the link(2) system call:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>

int main(int argc, char **argv)
{
    int rc;

    if (argc != 3) {
        fprintf(stderr, "USAGE: %s /old /new\n", argv[0]);
        return 1;
    }

    rc = link(argv[1], argv[2]);
    if (rc != 0) {
        fprintf(stderr, "%s -> %s: %s (error %d)\n",
                argv[2], argv[1], strerror(errno), errno);
    } else {
        fprintf(stderr, "%s -> %s: SUCCESS!\n", argv[2], argv[1]);
    }
    return rc;
}

And here's what happens when you run it:

$ mkdir dir1

$ ./lnk dir1 dir2
dir2 -> dir1: Operation not permitted (error 1)

On the surface, it would appear that macOS does not allow directory hard links. But Dennis assured me he had seen it, with his own eyes, so I tried again:

$ mkdir dir1
$ mkdir copy

$ ./lnk dir1 copy/dir2
copy/dir2 -> dir1: SUCCESS!

Now that is odd. I checked the man page for link(2), which states:

In order for the system call to succeed, path1 must exist and both path1 and path2 must be in the same file system. As mandated by POSIX.1, path1 may not be a directory.

A bald-faced lie, it would seem.

I have literally zero experience reading through Apple's Darwin codebase, so rather than a code dive, I spoke with a few colleagues. One of them, whose Google-fu is stronger than mine, found this StackOverflow post, which hints that Apple implemented the feature in OS X 10.5 Leopard (2007), for their Time Machine product.

Another SO question (referenced by the first) sheds a little light on the ground rules for this feature:

Snow Leopard can create hard links to directories as long as you follow Amit Singh's six rules:

  1. The file system must be journaled HFS+.
  2. The parent directories of the source and destination must be different.
  3. The source’s parent must not be the root directory.
  4. The destination must not be in the root directory.
  5. The destination must not be a descendent of the source.
  6. The destination must not have any ancestor that’s a directory hard link.

(Note: I believe the quote is referring to the author of Mac OS X Internals: A Systems Approach, Amit Singh)
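
Rule 2 is why my first attempt (./lnk dir1 dir2) failed while ./lnk dir1 copy/dir2 succeeded: dir1 and dir2 share a parent. A rough sketch of that check, comparing path strings only (parents_differ is my own name; a real implementation would resolve each parent and compare (st_dev, st_ino) pairs instead):

```c
#include <libgen.h>
#include <stdio.h>
#include <string.h>

/* Rough sketch of rule 2: the parent directories of the source
   and destination must differ. String comparison only; symlinks,
   ".." components, and relative paths would fool it. */
int parents_differ(const char *src, const char *dst)
{
    char s[4096], d[4096], sparent[4096];

    /* dirname() may modify its argument and may return static
       storage, so work on copies and save the first result. */
    snprintf(s, sizeof s, "%s", src);
    snprintf(d, sizeof d, "%s", dst);
    snprintf(sparent, sizeof sparent, "%s", dirname(s));

    return strcmp(sparent, dirname(d)) != 0;
}
```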

Living Double Lives

So OS X (now macOS) allows you to hard link directories under specific circumstances that are guaranteed not to cause cycles in the filesystem graph. Neat. Unfortunately, standard CLI utilities (BSD or GNU) seem to be caught a bit off-guard by this newfound power.

Consider GNU coreutils, which I brew install on every Mac I've ever owned:

$ which rm
/usr/local/opt/coreutils/libexec/gnubin/rm
$ rm copy/dir2
rm: cannot remove 'copy/dir2': Is a directory

$ which rmdir
/usr/local/opt/coreutils/libexec/gnubin/rmdir
$ rmdir copy/dir2
rmdir: failed to remove 'copy/dir2': Directory not empty

BSD utils has the same issue:

$ /bin/rm copy/dir2
rm: copy/dir2: is a directory

$ /bin/rmdir copy/dir2
rmdir: copy/dir2: Directory not empty

My initial advice to Dennis was to use unlink(1), since he was trying to undo a hard link, and that's precisely what unlink is for. In fact, once I had reproduced the issue on my laptop, I was able to fix the problem by unlinking the duplicate inode. When Dennis tried it, though, he got:

$ unlink copy/dir2
unlink: copy/dir2: is a directory

As it turns out, the stock unlink is just /bin/rm under a different name, and it accepts no options:

$ /bin/unlink
usage: rm [-f | -i] [-dPRrvW] file ...
       unlink file

The GNU coreutils version of unlink doesn't have this problem, apparently.
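
If you'd rather not depend on coreutils, a thin wrapper over the raw unlink(2) system call (in the spirit of the lnk program above) does the same job. One hedge: on Linux, unlink(2) on any directory fails with EISDIR; it's only the HFS+ directory hard links in question that it can remove.

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Remove a single link name via the raw unlink(2) system call,
   sidestepping rm's "is a directory" front-end check. */
int remove_link(const char *path)
{
    if (unlink(path) != 0) {
        fprintf(stderr, "%s: %s (error %d)\n",
                path, strerror(errno), errno);
        return -1;
    }
    return 0;
}
```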

Ultimately, Dennis took the nuclear option and rm -rf'd his way back to a sane filesystem. By removing everything under one of the directory instances, he was able to rmdir the other one and start over.

Diff'rent strokes, I suppose.

In Closing

I found this ordeal fascinating; I hope you did too. Here are a few things I've learned.

  1. macOS (HFS+) allows directory hard links
  2. The man pages lie to conceal that
  3. The stock coreutils cannot cope with that
  4. Always ALWAYS install GNU coreutils

Happy Hacking!

James (@iamjameshunt) works on the Internet, spends his weekends developing new and interesting bits of software and his nights trying to make sense of research papers.

Currently working on Bolo.