
ZFS for Research Data Storage

For the IT section of the 2016 XNAT Workshop, I’m releasing a white paper on using ZFS for research data storage.

You can download it here:

ZFS for Research Data Storage – White Paper 1.0

Posted in Uncategorized, ZFS Storage | 1 Comment

Announcing the 2016 XNAT Workshop

The XNAT development team will be hosting a 5-day workshop in St. Louis, Missouri, June 6 – 10, 2016. The workshop will include presentations and hands-on practical sessions covering the full range of XNAT functionality. We invite everyone from XNAT novices to experts to join us for a week of lectures, practicals, breakouts, and coding sessions.

By the time of this workshop, the new XNAT 1.7 version will have been released. Here is a high-level summary of the topics we plan to discuss:

Day 1: Introduction / Hackathon. New users get an introduction to XNAT, with an emphasis on what’s new in XNAT 1.7. Experienced users can work with XNAT developers on pet projects in a “hackathon” environment.

Day 2: XNAT Dev Ops. Best practices for installing and operating a stable and scalable XNAT system.

Day 3: XNAT Computing. Learn about advances and techniques in data processing using XNAT data.

Day 4: XNAT Programming. Insights and best practices for adding new features to XNAT.

Day 5: Impromptu Sessions / Hackathon. Start or continue a new development project, or attend breakout discussions of topics that arise during the week.

Learn more and register now on the XNAT Workshop site.

Posted in XNAT Workshop | Leave a comment

Duplicatinator 2.0

The Human Connectome Project generates A LOT of data that needs to be distributed to other researchers. It was clear before the project even started that downloading a zip from XNAT would never be practical, so other approaches were sought out.

For those with high-speed Internet connections (100 Mb/s+), the project licensed Aspera to accelerate downloads around the world. When even that is not practical for those wanting to work on the data, Connectome in a Box was devised. Think sneakernet.

The current distribution of 500 subjects takes five 4 TB hard drives. This will soon grow to 900 subjects and finally to 1200 subjects. Fortunately, 6 TB drives are now out, with 8 TB and 10 TB to follow. But that's not enough: we get lots of orders and have to duplicate hard disks frequently.

Standard hard drive duplicators make one-to-many copies, meaning we would either need multiple duplicators or a single run would depend on many cycles of the duplicator. Besides the rather high price of pre-built duplicators, the logistics didn't match our needs, so we built our own.

Our first revision was built from an Addonics 9-bay tower that held up to 15 hard drives. It was connected to a Dell 990 via an LSI SAS 9201-16e HBA. After a lot of work on methodology, we arrived at a system that has worked quite well. The scripts we use are on Bitbucket. The only complaint has been the Addonics drive bays: the fans have all gotten very noisy, and only 4 of the 5 bays are usable because of clearance problems with the punched and threaded mounting holes.
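The core of the methodology is a copy-then-verify cycle. As a rough illustration (not the actual Bitbucket scripts), one source can be fanned out to many targets in parallel with `dd` and then checked with checksums. Device names would normally be `/dev/sdX` block devices; here, ordinary files stand in so the sketch is runnable as-is:

```shell
# Hypothetical sketch of a one-to-many copy-and-verify run; the real
# scripts are on Bitbucket. SOURCE/TARGET* are temp files standing in
# for block devices.
SOURCE=$(mktemp); TARGET1=$(mktemp); TARGET2=$(mktemp)
printf 'disk image payload' > "$SOURCE"

# Copy the source to every target in parallel, then wait for all copies.
for t in "$TARGET1" "$TARGET2"; do
    dd if="$SOURCE" of="$t" bs=4M conv=fsync 2>/dev/null &
done
wait

# Verify each target against the source by checksum.
src=$(md5sum "$SOURCE" | cut -d' ' -f1)
for t in "$TARGET1" "$TARGET2"; do
    [ "$(md5sum "$t" | cut -d' ' -f1)" = "$src" ] && echo "$t OK"
done
```

Running the copies in the background and `wait`-ing lets every target drive stream simultaneously, which is what makes a many-bay tower worthwhile over a one-at-a-time duplicator.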

To handle even more copies, version 2.0 has been built. Using two Addonics Storage Towers, two Intel SAS expanders, and eight Icy Dock drive bays, the system was expanded to 40 drive bays attached to one workstation. Version 1.0 has been salvaged for SAS cables and pass-throughs.


Since the drives max out around 120 MB/s, the aggregate throughput should only slightly oversubscribe the four SAS lanes feeding each tower.
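The arithmetic behind that claim, assuming 6 Gb/s SAS lanes (roughly 600 MB/s usable each) and 20 of the 40 bays hanging off each tower's four-lane uplink:

```shell
# Back-of-the-envelope bandwidth check (assumptions: 6 Gb/s SAS lanes at
# ~600 MB/s usable each, 20 drives per tower at ~120 MB/s sustained).
DRIVES_PER_TOWER=20
DRIVE_MBS=120
LANES=4
LANE_MBS=600

demand=$((DRIVES_PER_TOWER * DRIVE_MBS))   # aggregate drive throughput
capacity=$((LANES * LANE_MBS))             # aggregate link throughput
echo "demand=${demand}MB/s capacity=${capacity}MB/s"
```

Demand and raw link capacity both come out to 2400 MB/s, so once protocol overhead is counted the lanes are slightly oversubscribed, as noted above.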

Version 1.0 took about 36 hours to run a duplication and verification with 2 source drives and 10 target drives. When we have numbers, I will update this post with the performance of version 2.0. By the specs, it should perform comparably with up to 40 drives. We may find that other bottlenecks surface that need to be addressed with additional hardware.

Posted in Uncategorized, XNAT Hardware and IT | Leave a comment

XNAT, Heartbleed, and you

The impact of the Heartbleed vulnerability in OpenSSL has been much discussed. How does it affect XNAT users and administrators? Fortunately, our risk exposure is extremely narrow. Lead XNAT developer Rick Herrick wrote a quick note on the Water Cooler discussion page of our developer wiki.

The Heartbleed exploit does not affect XNAT directly! XNAT does not use OpenSSL internally or in the application at all.

This is not to say that it may not affect your XNAT installation. You are at some risk if you use one of the exploitable versions of OpenSSL to provide an HTTPS connection at your Tomcat or HTTP proxy (e.g. Apache HTTPD, nginx). It would be possible for an intruder to open the encrypted connection between a user's browser and the server. Once that is available, the intruder could see the user's login credentials in the HTTP transaction as the user logs in. They would also be able to monitor the contents of that traffic, potentially exposing PHI or other identifying information from the XNAT installation.
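If you administer an XNAT server, a quick first check is the OpenSSL version your system reports (a sketch only; `openssl version` shows the locally installed library, which may not be the build your web server actually links against). OpenSSL 1.0.1 through 1.0.1f are the main vulnerable releases; 1.0.1g is the fixed release, and the older 0.9.8 and 1.0.0 lines were never affected:

```shell
# Classify an OpenSSL version string against the Heartbleed-affected range
# (1.0.1 through 1.0.1f; fixed in 1.0.1g).
check_heartbleed() {
    case "$1" in
        1.0.1|1.0.1[a-f]) echo "vulnerable" ;;
        *)                echo "not affected" ;;
    esac
}

check_heartbleed 1.0.1f   # vulnerable
check_heartbleed 1.0.1g   # not affected
```

To check your own server, feed it the second field of `openssl version` output, and remember to restart Tomcat or your proxy after upgrading so the patched library is actually loaded.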

Read the full post here:

Posted in General Announcements | Leave a comment

XNAT 1.6.3 Release Details

XNAT 1.6.3 has been released! This is our most heavily-tested release to date, and updates have been made to series importing, the prearchive, administrative tasks and other features.

A complete list of XNAT 1.6.3 updates can be found here:

You can download XNAT 1.6.3 here:

Posted in XNAT Releases | Leave a comment