
All the bits in a bucket


Phil Rhodes sets out to build a budget data recorder that will record 1080p/24 4:4:4 10-bit signals in the field, using only relatively inexpensive off-the-shelf components.

First published in Showreel magazine, February 2007

HDV is great. Don’t get me wrong; nothing has more successfully given the lie to the phrase “you don’t get something for nothing” during the time I’ve been working with video. I’m just one of those awkward perfectionists who’s never satisfied with the thought that there’s over 133MB a second of data coming off the chip block in a Canon XL H1, but when I post HDV, I have access to about four of them.

Of course, due to clever, pencil-wielding people in small, darkened university laboratories – the kind of people who use terms like “discrete cosine transform” without batting an eyelid – the situation doesn’t seem all that bleak when you look at the pictures. HDV will, in specific circumstances, rival digital betacam as a tape format, even if the attached cameras often don’t. This is a frankly spectacular achievement at a consumer price point, and a rare example of market forces doing something good for the world. Of course, there has to be a ‘but’. Problems set in when you want to do something clever with your material in post production. I’m sure you could get an uncompressed HD original of a feature film, transfer it to HDV, project that, and get reasonably watchable results. But could you, with sufficient certainty, shoot the feature film on HDV, put it through the same wrangling in post production, and end up with something similarly watchable? Compression is not sufficiently transparent; it is not symmetrical, and reintroduces the long-gone (and not lamented) concept of generation loss to digital media.

To make HDV look as good as it does, the designers used all kinds of knowledge about how human eyes work, from the fact that we see detail in brightness and not in colour, to the fact that shadows are less visible than midtones. If you start messing around with the image – changing colours, resizing, adding effects and titles – those assumptions begin to break down. Maybe that shadow isn’t so shadowy any more, or that edge that wasn’t very red is now much redder. I’m sure I’m not telling most readers anything they didn’t, even subconsciously, already know, but it’s exactly because of this sort of concern that HDV is fantastic for high-definition newsgathering, even documentaries, but for anything that’s going to be graded in post production, access to more of that hundred-plus megs a second of data may be somewhere between ‘useful’ and ‘essential’.

Now that manufacturers have become smart enough to start putting HD-SDI outputs on their cameras, getting at that data has become possible. Okay, I know of no HDV camera that makes every bit of CCD data available on its HD outputs, but 1280x720 8bit uncompressed is a much better deal than 1280x720 8bit squeezed into an HDV stream. More about cameras anon, but the ability to record this data is attractive in the extreme.

As I say, this is not particularly new thinking. Image compression has been decried since it was first used to side-step the unfortunate fact that it’s very hard to get magnetic tape to record uncompressed pictures. Uncompressed recording is not exactly a new desire, and it’s become easy enough to do in standard def that finishing SD material as anything other than uncompressed has been rare for quite some time. Because changes require everyone to buy a new TV, the standards that are actually used for broadcast evolve relatively slowly, certainly much more slowly than Moore’s Law drives advances in desktop computers. Uncompressed standard-definition video was therefore becoming, in real terms, much cheaper and easier to deal with on really quite modest desktop systems.

Changing over to HD effectively reset the clock on this. As a ratio of price to performance, I’d estimate that handling of HD video is currently at about 1998 standard-def levels; we’ve been dragged back almost a decade by the sudden need to handle four times as much data. Shortcuts, fudges and quick solutions have abounded, mainly because people didn’t want to lose the ease and convenience that had been so hard won with desktop SD production.

And then, of course, nit-pickers like me came along, looked at all the fudges, shortcuts and half-arsed jobs, and pointed out all that was wrong with them. Perhaps the easiest example of this is the DVCPRO-HD codec, commonly quoted as storing video as a 100Mbit/s data stream. In fact, it only does that when you’re recording at 60fps. The codec compresses each frame identically, regardless of the frame rate, so at 24fps a DVCPRO-HD stream actually runs to only 40Mbit/s – yes, the world’s favourite HD codec stores less data than standard-def DVCPRO-50. I absolutely don’t mean to victimise DVCPRO-HD here – without it, there would be no HVX200 and no Varicam, both of which are interesting pieces of equipment with unique abilities. HDCAM is just as bad, if a little more complicated to explain: by the time you’ve taken all the compression and subsampling into account, it stores a mere few hundred colour samples per line. So what’s going on here? I was promised 1920x1080, two megapixels a frame, and 24 of them every second. This was not the deal!
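To put rough numbers on that – my arithmetic, not Panasonic’s, and ignoring audio and other overheads – the sum looks like this:

100Mbit/s ÷ 60 frames/s ≈ 1.67Mbit per frame
1.67Mbit per frame x 24 frames/s ≈ 40Mbit/s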

So what does it actually take to do HD? Real HD, not this fudged, subsampled, interpolated, approximated stuff we’ve become so used to that it seems like the real thing. Well, since Sony worked so hard on HDCAM SR, you can actually have very nearly uncompressed pictures on tape. 880Mbit/s is a lot, and certainly a grand achievement for a collection of spinning magnets and rust-coated sellotape. On the SRW-1 field recorder (and only there), in 4:2:2 YUV, you can even record totally uncompressed. The 4:4:4 mode is very slightly MPEG-4 compressed, so you can record either uncompressed or without colour subsampling, but not both.

OK, so HDCAM SR is pretty convincing in this mode, but in my ideal world, I’d like to do away with compression and subsampling altogether. The SRW-1 HDCAM SR field recorder has a list price of some £32,000, and only the field deck supports the 4:4:4 mode at present. Record something to SR, and not only do you need the deck to record it, you need the same VTR to play it back, and that means a very, very big-ticket item every time you want to even look at your material. Not a problem, of course, if you have the budget for a multi-camera Genesis, D-20 or F900 shoot, but daunting if you’re an indie filmmaker shooting HDV.

An alternative to HDCAM SR, and something that is done all the time in post production, is to record to a hard disk array. This approach is a favourite in edit suites, where a truly gigantic hard disk array is teamed with the biggest, scariest workstation possible, so as to better deal with having to do four times as much rendering work as there would be for SD. These systems aren’t really packaged for location work, though; dragging an Avid Nitris out into the field has, I believe, been done, but it wouldn’t be my first choice of hardware for that summit-of-Everest location. In any case, there’s a lot of bulky, power-hungry hardware there, targeted at edit and rendering work, which a simple recorder doesn’t need to have.

The traditional way to make a reliable HD edit workstation was with one of the big server motherboards from someone such as Supermicro, hosting a couple of Intel’s top-of-the-line Xeon processors, and an upscale disk controller. Until recently, this would most usually have been a SCSI or Fibre Channel link to an offboard RAID; since the advent of serial ATA disks, companies such as 3ware have begun offering controllers that provide sufficient performance with a cluster of eight hard disks.

It’s important to realise at this point that there’s not a lot in common between one of these large, extended form factor server boards and the computer on your desktop. For a start, until recently most PCs had only the most basic form of PCI bus to host peripherals, and on this bus can flow a maximum of 133MB/s, shared between all the devices. There was, therefore, no way to build an uncompressed 4:4:4 HD edit station on a desktop PC until quite recently. Those disk controllers will work, but at rates crippled by the bus they’re connected to.

The upscale boards generally have extended PCI buses, often including more than one bus and doubling the data transferred per clock to 64 rather than 32 bits, as well as increasing the clock rate from 33 to 66 or 100MHz. This makes HD, even uncompressed 2K, more than feasible with two controller cards on two separate buses. The problem with this approach, which has been used to produce disk recorders packaged for on-set use, is that it’s bulky, power-hungry and fails to take advantage of much more elegant solutions offered by more modern hardware.
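As a rough guide – these are my sums, and theoretical maxima at that, since real-world transfers always fall a little short – the bus arithmetic works out like this:

32 bits x 33MHz ÷ 8 = 133MB/s (standard desktop PCI, shared between every card)
64 bits x 66MHz ÷ 8 = 533MB/s (one extended bus on a server board)
64 bits x 100MHz ÷ 8 = 800MB/s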

It’s no secret that the development of this sort of hardware is largely driven by computer games. The computer industry had no interest in providing home users with the kind of fast expansion buses that would allow desktop machines to compete with workstations until computer gaming, particularly competition with games machines using custom hardware, demanded the performance. The latest manifestation of this is the PCI Express, or PCIe, bus, which most currently available computers use to connect all the plug-in parts together. PCIe, which you can read all about on Wikipedia if geekdom attracts, is monstrously fast; easily fast enough to transport HD video.
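How fast? For the record – these are the bus’s nominal figures, not anything I’ve measured – a single PCIe lane moves about 250MB a second in each direction:

2.5Gbit/s per lane x 8/10 (encoding overhead) ÷ 8 = 250MB/s per lane, per direction
250MB/s x 4 lanes = 1GB/s for a typical four-lane slot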

The modern approach

But don’t take my word for it. Let’s try and prove it by building our own DIY data recorder. Look at what we actually need:

• an HD-SDI interface, with which to connect our computer to the camera;
• a disk array capable of handling the data in real time;
• a host system; and
• software to tie it all together.

The first item is almost a no-brainer. AJA Video Systems and Blackmagic Design have been the watchwords in this field for some time, and they’ve both recently released versions of their hardware compatible with the PCIe bus. You can pick your requirements here to a great extent; if you’re only ever going to be recording from a Canon XH G1 or JVC HD250 camcorder, you don’t need dual-link HD-SDI. If you’re thinking of renting an Arri D-20 or Grass Valley Viper, you do. And for those of you who’ve been following the conspicuous prepublicity for Jim Jannard’s Red One camera, if it gets the dual-link SDI we’ve been promised, you’ll probably want dual-link for that too.

Dual-link allows HD-SDI to transport 1080p HD images as full RGB pictures, without resorting to 4:2:2 subsampling. Either your camera of choice will do it, or it won’t; the upsides of dual-link are obvious, and the downside is limited to another connector to plug in. For the purposes of this investigation, Blackmagic Design supplied us with its Multibridge Extreme, a very capable device with wide ranging features to record, process and convert both SD and HD video. AJA gave us the Xena 2Ke, which works in a rather different way, but has a broadly similar featureset as regards recording. I’m not going to go into the specifics of each card – you can see that on the manufacturers’ websites – but there are a few points worth going over here.

The Multibridge is in some ways the more suitable piece of hardware, in that it has a DVI output that allows very sharp, very low-cost monitoring on a TFT computer monitor. It’s also capable of working both as a PCIe device attached to the computer and as a standalone format converter, although this doesn’t have a lot of relevance to the recorder role. Because of this, though, the Multibridge exists as a 1U rackmount unit ideally suited to being bolted into the back of a racked-up recorder, whereas the rackmount breakout for the Xena 2Ke is an optional item. For recording on set, the Xena is certainly the more physically compact option if you don’t want a rackmount breakout, as it fits entirely inside the computer and allows you to connect directly to the backplane of the card. I’m not entirely sure I’d want a long dual-BNC line hanging on the miniature connectors it uses, though.

Speaking of which, connectivity is very comprehensive on both units, although the Xena doesn’t offer analogue audio inputs. Since most sound recordists still mix in analogue, regardless of how they actually record it, this perhaps makes the Xena somewhat less ideal for productions wishing to take advantage of single-system sound recording. The Multibridge, however, brings all its digital audio connectivity out on one 25-pin D connector, requiring a custom cableform.

Of course, all this finely-etched silicon is a complete waste of time until it’s been told what to do. AJA supplies the Machina software application with the Xena 2Ke, meaning you’re ready to begin recording right out of the box, whereas Blackmagic expects you to pair the card with software such as Premiere Pro. The Premiere option has its own advantages, with the ability to bin takes and produce preview edits, but it seems like overkill for an on-set recorder. AJA gives you a lot more options with regard to recording format – many different types of AVI, stills sequences including DPX, QuickTime and others. I quite like the Blackmagic approach of recording an AVI then mounting it as a virtual drive, though – this is the company’s way of providing access to the frames as DPX files, and it’s convenient to manage takes as single units, then provide frame-by-frame access as required.

But this isn’t a competition, particularly as both units are grotesquely overspecified for use as nothing more than a recorder, and I’m sure I haven’t used even half their features. This isn’t a review of either of them, other than for the specific task of on-set recording, and neither was really designed to do this sort of work. The AJA board is more expensive, and is missing the DVI and analogue audio connectivity. It is worth mentioning, though, that Thad Huston at AJA provided absolutely superb support to get the demo board working, and if they’re as responsive to customers as they are to reviewers, it’s a very good sign.

Storage

The second requirement, and the one to which attention naturally turns, is that of storage. Seagate supplied us with some of its most recent Barracuda 7200.10 hard disks, which are among the first to use a technology known as ‘perpendicular recording’ – laying zones of magnetism on edge rather than end to end – resulting in much higher density storage. These drives are 750GB apiece, which is positively spacious, even for uncompressed HD, and since they spin at the same speed as any other hard disk, the higher density translates into very healthy transfer rates. Individual drives have been measured at well over 50MB/s, sustained for both read and write, which bodes well.

All this said, there is an intrinsic reliability issue with high areal density – that is, having lots of data packed into a small area. As hard disks wear, the spindle bearings slacken, and the stability of the track beneath the head begins to degrade. Eventually, when the instability exceeds the ability of the head servos to compensate, the disk becomes unusable. The smaller the individual areas which represent the ones and zeros of binary data, the more of a problem this is, and these Seagate drives have among the smallest ones and zeros of any hard disk ever made.

A way to mitigate the reliability problem with these disks, especially in this application, which will need several of them working together, is redundancy. No, not sacking people – keeping the data twice, so if we lose some of it, we can get it back. The term RAID originally stood for Redundant Array of Inexpensive Disks, the idea being that if one disk failed, the data had already been backed up on another. There are various modes in which this can work, including simply telling the disk controller to treat two disks as one, so if one fails, the other remains readable, a situation referred to as RAID-1 and generally termed ‘mirroring’. The result is an array of disks which, theoretically, can shrug off any single drive failure, but at double the cost per unit of storage. Since the inception of RAID, the idea of using drives in parallel to increase throughput has also been implemented. This is RAID-0, and it is referred to as ‘striping’. The performance increase is more or less linear with the number of drives applied, but with, say, five disks, the array is only a fifth as reliable as a single drive, because the failure of any one of them takes out the lot – there’s nothing redundant about this technique.

The most commonly used system for this sort of application, RAID-5, is more advanced, spreading parity data across an array of at least three and usually no more than six disks, such that if any one of them fails, the array can be rebuilt from the others. The performance increase is not as marked as with RAID-0, but the backup factor is hard to turn down, and the cost increase (and storage loss) is at most 33 per cent, in a three-disk array, and as little as about 17 per cent in a six-disk array, where one disk’s worth of space is lost to redundancy.
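For the curious, the principle behind that rebuild is simple exclusive-OR parity. The following few lines of Python are a toy illustration of my own – nothing to do with any real controller’s firmware – showing why the loss of any one disk in the set isn’t fatal:

    # Toy RAID-5-style parity demonstration: three 'data disks' and one
    # 'parity disk' holding the XOR of the data blocks.
    def xor_blocks(blocks):
        # XOR a list of equal-length byte blocks together.
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, value in enumerate(block):
                result[i] ^= value
        return bytes(result)

    data = [b"AAAA", b"BBBB", b"CCCC"]       # contents of three data disks
    parity = xor_blocks(data)                # what the parity disk stores

    # Pretend disk 1 has died: rebuild its contents from the survivors.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]                # the lost block comes back intact

Real RAID-5 controllers interleave the parity blocks across all the drives rather than dedicating one disk to them, but the recovery sum is the same.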

At least one currently available field disk recorder uses six disks in a RAID-0 array. It’s an attractive engineering solution, since it minimises the number of disks required to achieve a given data rate. However, most computer systems engineers would be utterly horrified at the idea of recording such expensive data as a film shoot on a device which is one-sixth as reliable as a single hard disk.

RAID redundancy is not a panacea. It does not protect against silly users accidentally deleting all their data, because the disks will faithfully maintain any changes you make. More to the point, the principal enemies of hard disks are shock, heat and wear, and those factors will apply equally to all the disks in a single physical cabinet such as is used to house a RAID array. The likelihood therefore is that when one drive fails, others may well be close behind it, all of them climbing the alarmingly steep wear-out end of the so-called bathtub failure curve together. But it’s certainly a better idea than no redundancy at all.

Discussion of reliability aside, let’s look at what we’re asking for in the worst (or, perhaps, best) possible case:

• 10bit dual-link 1080p;
• 1920 x 1080 pixels x 30 bits per pixel x 24 frames/s.

That rather intimidating string of numbers represents the number of pixels in a 1080p HD frame – 1920x1080 – multiplied by the number of bits per pixel, 10 per channel in three RGB channels, multiplied by the frame rate. That gives us a rather large number of bits per second, so let’s divide by the number of bits in a byte, then by the number of bytes in a kilobyte, then by the number of kilobytes in a megabyte to give us a number we’re used to looking at:

1492992000 / 8 / 1024 / 1024 = 177.98MB/s

Now, 177.98MB/s is a rather large number, and that’s overlooking overheads due to file formatting, fragmentation and other problems, so 230MB/s or more is probably a good figure to aim for.
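If you want to run the same sum for other formats, it generalises easily. Here’s a throwaway Python calculator of my own – not part of any of the software mentioned here – which reproduces the figure above and lets you try other combinations:

    # Sustained data rate for uncompressed video, in megabytes per second
    # (using 1MB = 1024 x 1024 bytes, as above).
    def data_rate_mb(width, height, bits_per_pixel, fps):
        bits_per_second = width * height * bits_per_pixel * fps
        return bits_per_second / 8 / 1024 / 1024

    # 10-bit RGB (30 bits per pixel) 1080p at 24 frames per second:
    print(round(data_rate_mb(1920, 1080, 30, 24), 2))   # 177.98
    # 8-bit 4:2:2 (16 bits per pixel) 720p at 25 frames, for comparison:
    print(round(data_rate_mb(1280, 720, 16, 25), 2))    # about 44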

Given that we don’t want to be at the mercy of a single disk failing, the obvious thing to do is go for a RAID-5 array – minimal loss of storage space to redundancy, but any one disk can fail.
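A back-of-the-envelope sizing sketch, again in Python and again purely my own arithmetic, suggests how many of the Seagate drives that implies. It assumes write speed scales roughly with the number of data drives, which real RAID-5 controllers only approximate – parity calculation costs something – so treat it as a guide rather than a guarantee:

    import math

    target_mb_s = 230        # the rate we want to sustain, with headroom
    per_drive_mb_s = 50      # measured sustained rate of one Barracuda 7200.10
    drive_size_gb = 750

    data_drives = math.ceil(target_mb_s / per_drive_mb_s)   # 5
    total_drives = data_drives + 1     # one drive's worth of space lost to parity
    usable_gb = (total_drives - 1) * drive_size_gb

    print(total_drives)      # 6 drives in the array
    print(usable_gb)         # 3750GB of usable space

Six drives, as it happens, is what the stress test mentioned further on was run across, so the numbers hang together.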

Unfortunately, there’s currently a compatibility issue between the nVidia RAID controller I used and the large Seagate hard disks, and I haven’t been able to do that yet – I’ve been stuck with a hybrid arrangement using a little-used subfeature of Windows disk management. Foxconn, which makes the board, claims nVidia is champing at the bit to solve this problem, as they’ve clearly realised the ludicrousness of a RAID controller that doesn’t understand large drives, and I’m promised a fix for this. As it is, I’m sacrificing half the hard disk space to redundancy, but it still has over two hours of recording space and the cost-benefit equation works out favourably.

So we have a disk array that’s fast enough and a card to capture the data. The intermediary is in many ways just a PC, but it has to be able to get data from A to B in a timely manner. I chose the Foxconn C51XEM2AA motherboard, which is targeted at high-end games players. It’s worth remembering, though, that high-end games players routinely watch realtime rendered 4:4:4 high definition video at resolutions and frame rates much higher than what we’re talking about, and while it’s a bit of a leap to connect the two, it’s a relevant comparison in terms of raw bandwidth. A nod for top notch work here goes to Carl Brunning at Foxconn, who was available on a Sunday morning to give me support – and this is not a small company making small amounts of hardware; they ship thousands of boards, so the personal touch is doubly welcome.

As I mentioned above, these systems traditionally use a pair of Xeon processors, and although Intel has been selling Xeons to the high-end computing sector for quite a while, they’ve been well developed and upgraded and are still formidable performers. They’re also bulky, or at least they are with the forced-air coolers clamped on top, and pull quite a bit of power. Given that we’re not actually trying to edit anything here, I wasn’t convinced that two physical processors were necessary, even though AJA specifies them in its system recommendations. Unless you’ve been hiding under a rock for the past year or so, you’ll be aware that there are now dual-core processors available for desktop computers, effectively (and almost literally) combining two CPU devices on one physical chip. AMD supplied us with the Athlon 64 X2 3800+; it’s far from the top of the range the Foxconn board will support, but this should not, ideally, be a processor-limited operation.

Main memory was supplied by Corsair, although I’m not pushing memory timings as hard as I’m sure other reviewers do when writing solely about memory. nVidia get a mention here for having built the drive controller that Foxconn uses on its motherboard, which, in a maximum-throughput stress test, managed to put nearly twice the data rate we need across six of the Seagate disks. No wonder it’s got a cooling fan clamped on top of it! I threw an older PCI graphics card onto the board to provide a basic display without stressing other aspects of the system too much, installed Windows XP, and held my breath.

This system really shouldn’t work. It doesn’t conform to any of the system requirements given by either Blackmagic or AJA. Experienced professionals, and my own nagging doubts, told me that the disk controllers would never be fast enough, and that the PCIe bus would never put up with such an outlandish selection of hardware.

So does it work? Well, yes, actually, it does. It works with room to spare. Just prior to going to press, I managed to get hold of a JVC HD251 and a Canon XH G1 with which to continue the tests. Hopefully, by next issue, I’ll have had a chance to use the recorder with the HD251 on a live project. Then I’ll be able to bring our trial recorder into alignment with these – and perhaps a few more outlandish – cameras. More on this next time…
