
Not so fast, folks!


As always, we’re trying different things. I recently used some older parts to build a server running Debian with a 2.6.39 kernel, a 3ware RAID card, and a Btrfs filesystem. LVM was placed on top of the 3ware partition, and Btrfs was put on top of that. The storage was shared out via NFS to an ESXi host.
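For the curious, the stack was assembled roughly like this. The device names, volume names, sizes, and the ESXi host address below are illustrative, not the actual values:

    # LVM physical volume on the partition exposed by the 3ware card
    pvcreate /dev/sda1
    vgcreate vg_store /dev/sda1
    lvcreate -L 500G -n lv_nfs vg_store

    # Btrfs on top of the logical volume
    mkfs.btrfs /dev/vg_store/lv_nfs
    mount /dev/vg_store/lv_nfs /srv/vmstore

    # /etc/exports entry for the ESXi host, then publish it:
    #   /srv/vmstore 192.168.1.50(rw,no_root_squash,sync)
    exportfs -ra

ESXi mounts NFS datastores as root, so the export generally needs no_root_squash (or an equivalent mapping) to work.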

I was expecting the system to work okay, and it did for about a month. Eventually the volume needed more space, so the Btrfs volume was resized and life continued. About a week ago, system performance became unacceptable: VERY slow, high load averages, 80+% IO wait, etc. In addition, the VM would no longer boot because ESXi was complaining about disk IO timeouts! I was definitely experiencing some major performance issues. All of the hardware I was using was old, but tried and true. Since Btrfs was new on the scene and the kernel was behaving strangely, I suspected the shiny new Btrfs. I created a new Reiserfs (v3) volume, which was (and apparently still is) my filesystem of choice, copied the VMDK and associated goodies to the new volume, shared it out, and ESXi was in heaven again.
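Both the resize and the later escape to Reiserfs amounted to a handful of commands. This is a sketch using the same illustrative names as above, not a transcript of the actual session:

    # Grow the logical volume, then grow Btrfs into it (online)
    lvextend -L +200G /dev/vg_store/lv_nfs
    btrfs filesystem resize max /srv/vmstore

    # Later: carve out a Reiserfs volume and move the VM files over
    lvcreate -L 500G -n lv_reiser vg_store
    mkfs.reiserfs -q /dev/vg_store/lv_reiser
    mount /dev/vg_store/lv_reiser /srv/vmstore2
    cp -a /srv/vmstore/. /srv/vmstore2/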

So, what I have learned is:

1. Btrfs has some issues with NFS. Performance appeared perfectly fine while copying the data from Btrfs to Reiserfs, but accessing the same data via NFS was awful.

2. Btrfs performance degrades over time. Initially the VM worked great. Over time, performance degraded until it was basically no longer usable.

I haven’t yet determined exactly why I experienced these issues with Btrfs, but Reiserfs seems to have solved them for now. I’ll definitely be looking for clues in the coming months.
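If you hit the same symptoms and want to separate NFS trouble from filesystem trouble, a few stock commands go a long way. A minimal sketch, assuming the sysstat and nfs-common packages and the illustrative mount point from above:

    iostat -x 5                        # watch %iowait and per-device utilization
    nfsstat -s                         # server-side NFS operation counters
    nfsstat -c                         # client side; rising retransmits implicate NFS
    btrfs filesystem df /srv/vmstore   # Btrfs’s own view of space usage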

Lastly, some data on backup times from the VM’s logs. (The VM is responsible for backing up other servers; these times are for one small server that is backed up to the VM’s storage.) The final 1-minute entry is from after the switch to Reiserfs.

Duration: 1 min.
Duration: 1 min.
Duration: 1 min.
Duration: 0 min.
Duration: 1 min.
Duration: 1 min.
Duration: 1 min.
Duration: 3 min.
Duration: 1 min.
Duration: 8 min.
Duration: 10 min.
Duration: 39 min.
Duration: 12 min.
Duration: 59 min.
Duration: 109 min.
Duration: 66 min.
Duration: 3 min.
Duration: 240 min.
Duration: 297 min.
Duration: 298 min.
Duration: 657 min.
Duration: 375 min.
Duration: 2140 min.
Duration: 1 min.

About a year ago, a client needed a NetApp (NFS server), but the IPO wasn’t there yet and the startup’s balance sheet was still in the red. Needless to say, after a few trials we ended up with Open-E DSS because of budget constraints.

Oh what a roller-coaster it has been…

Open-E is NOT the viable NFS solution they claim it to be. Our initial performance tests and configurations went well, but the more we tried to use the system as a production NFS server, the more bugs we found and the more frustrated we became. After my experiences, I think Open-E may have a life as an iSCSI or basic Samba server, but if you’re looking for reliable, production-level NFS storage, you’d be better off installing something like CentOS or Solaris and rolling your own. Open-E has its market, but their target market is obviously much too broad.
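To give a sense of how low that bar is, here is a minimal roll-your-own sketch for a CentOS 5-era box. The export path, network, and options are examples, not a recommendation for any particular environment:

    yum install nfs-utils portmap
    echo '/export/data 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    service portmap start
    service nfs start
    exportfs -ra
    showmount -e localhost   # confirm the export is visible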

Some of the serious issues with Open-E as of about 2 months ago:

  • Support – Their US support staff doesn’t know much about UNIX or NFS
  • UPS connection – If configured with apcupsd, a UPS self-test causes the system to shut down
  • NFS locking – After two releases that claimed to have NFS locking patched, it still wasn’t, and it required a separate patch from Germany. I reported it to Open-E in Jan 08, and one year later people are still complaining it’s not fixed (because it isn’t). A quick way to check locking yourself is sketched after this list
  • Backup – If you want to back up this unit, your best bet is NFS mounting it rather than using the included agents
  • NFS root squash options – They don’t work with certain path configurations
  • YP/NIS – No useful support. Forget it
  • Quotas – It supports quotas, but they can only be modified through the web UI
  • Web UI – For a few production software releases, the web UI was unusable
  • Monitoring – No SNMP or any other monitoring is possible
  • Active Directory Integration – Partial integration. Does not work with services for UNIX
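The locking complaint is easy to verify for yourself. From a client that has the server mounted, something like the following (paths are examples, and it assumes a kernel recent enough to map flock onto NFS locks) shows whether locks are actually enforced:

    mount -t nfs server:/export/data /mnt/test
    flock -n /mnt/test/lockfile -c 'sleep 30' &
    flock -n /mnt/test/lockfile -c 'echo unexpected' \
        || echo 'second lock refused: locking works'

If both flock calls succeed at the same time, NFS locking is broken.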

Based on the above, I don’t think anything needs to be said other than: Do you think their users are the QA team?

The current status of the Open-E box is “don’t touch”. We’re looking to dump the Open-E software as soon as feasible. It’s an unfortunate lesson, but luckily Sun has a solution. Since the Open-E debacle, ZFS has been given a similar run-through and has passed with mostly oohs and aahs. Migrating a 5 TB server to a new filesystem and operating system is not a quick-and-dirty project.
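The rough shape of that migration, with hypothetical pool, dataset, and disk names (and rsync assumed to be installed on the Solaris side):

    zpool create tank mirror c0t0d0 c0t1d0   # or however your disks enumerate
    zfs create -o sharenfs=on tank/data      # ZFS manages the NFS export itself
    rsync -aH /mnt/open-e/ /tank/data/       # haul the 5 TB across

That sharenfs property is part of what earned the oohs and aahs: no /etc/exports bookkeeping at all.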

Sometime in the near future you’ll see an Open-E DSS module on ebay. $1 is all I ask :)