Corey Robson


Nvidia GK110B guide for OSX 10.9.2


EDIT MARCH 2, 2014 - I should have mentioned I’m running CUDA 5.5.28, not the newer version. Ironically 5.5.43 has broken CUDA for some people.

BTW, this has worked without a single crash since installing, so I won’t be trying any newer drivers that surface until I hear of a few success stories out there with a meaningful performance increase.


As I couldn’t find a straightforward guide putting all the bits of info out there together, I thought I’d share what I did to get a fully working GTX780 under 10.9.2. Hopefully this will help those with any of the Nvidia GK110B GPUs out there collecting dust.

Disclaimer: Do not attempt any of the following without a BOOTABLE WORKING CLONE of your OSX drive in case you run into trouble. I’ve only tested this with my card listed below and I have no idea if it will work with your GK110B card if it is different from mine.

OpenCL and CUDA are now working 100% on my hackintosh / EVGA GTX780 SC (GK110B card) with 10.9.2, released yesterday. Here’s my card:


Here’s what I did:

1. Updated to 10.9.2 using this guide (see Stork’s post on page 1 made at 2:13pm):

2. Shut down, swapped my GTX770 for my GTX780, rebooted

3. Upon reboot I had no OpenCL but CUDA was still working.

4. Downloaded this 10.9.2 web driver:

5. Installed and rebooted

6. Nvidia pref pane still says OSX default drivers are loaded, not web drivers. OpenCL now works though; CUDA does not. No CUDA update available from Nvidia yet.

7. Selected web driver in preference pane, rebooted

8. Same result - same OpenCL score in Luxmark with the web driver now active. At this point the GTX780 was functional with OpenCL but not CUDA, so I wanted to get both working. Next up:

9. Copied all 10.9.2 GeForce kexts from /System/Library/Extensions into a desktop folder in case things went wrong

10. Replaced my 10.9.2 GeForceGLDriver.bundle with the 10.9.1 GeForceGLDriver.bundle copied from the /System/Library/Extensions folder on my 10.9.1 backup OSX drive

11. Repaired disk permissions

12. Followed Malaxor’s advice exactly from here:,8369.15.html

- I took the 10.9.2 libclh.dylib file from inside the 10.9.2 GeForceGLDriver.bundle in my desktop GeForce kext backup folder and installed it into the 10.9.1 GeForceGLDriver.bundle now located in /System/Library/Extensions. By installing it I mean pasting it over the existing libclh.dylib file.

(To access libclh.dylib, right-click on the GeForceGLDriver.bundle, select “Show Package Contents”, and you will find it in the Contents/MacOS folder.)

13. Opened Terminal and entered the following so the kernel cache gets rebuilt on the next boot:

sudo touch /System/Library/Extensions

14. Repaired disk permissions

15. Rebooted and could not get the OSX default GeForce drivers to load. If you have this issue, see this post and follow the instructions from Asgorath’s reply #18:,8369.15.html

- I did exactly what he suggests

16. Added nvda_drv=0 to my org.chameleon.Boot.plist just to make sure the default OSX GeForce graphics would load. (May not be necessary.)

<key>Kernel Flags</key>
<string>-v darkwake=0 npci=0x2000 nvda_drv=0</string>

17. Rebooted and now have OpenCL and CUDA working in 10.9.2.

When a new CUDA update is available I will replace the work above with the default 10.9.2 web driver files, unless performance is slower. It’s not an elegant fix but it worked for me. So far I have tried renders in Scratch Lab v8 beta, Resolve 10, and the Adobe CC apps, and all are working as expected.

A huge thanks to everyone listed in the threads linked to above for figuring this out.



Old School Digital Imaging Primer


Recently I was asked by a colleague if I had any DIT-related course material to share. So, after a bit of digging around I found my beautiful old PowerPoint presentation from a weekend course I gave to the Directors Guild of Canada and I.A.T.S.E. Local 891 in 2008. I’m posting it here for your viewing pleasure in case you or someone you know is just getting their feet wet with this whole digital moving image fad.

It was meant purely as an introduction to HD terminology and technology for Producers, Production Managers, and Production Designers as they began their transition away from film. Hoping that using their existing film-based vocabulary as much as possible would help the cause, I sacrificed technical accuracy at times. For example, I used the term 24FPS throughout when in most cases 23.976FPS would have been more accurate. Stuff like that. If I weren’t in the middle of a show I’d update it with more current camera and format references, but this will have to do for now.

Just in case you didn’t catch the sarcasm earlier, it looks like shit frankly but anyway, enjoy and share as much as you like. Here it is:


If you’re craving much more in-depth material on digital imaging, you won’t be disappointed with this brilliant 7-part presentation by John Galt (Head of Digital Imaging at Panavision) and Larry Thorpe (Canon optics guru), now hosted here at FreshDV:


First posted by Panavision in 2008, it is still very relevant and will need to be watched more than once to soak it all up. 

To be continued…

Canon C100 thoughts and rants:


Bought one with the $1000 rebate now being offered, love it, would do it again in a heartbeat. Why? Well, since you asked, it’s because I, like many others, have grown to love the Canon C300 and now the C100. For both cameras the specs sounded underwhelming when they were released and still do for the price, but as a daily user of Alexas and Epics I can easily say that these are solid production cameras that perform much better than the specs suggest.

EDIT: I realize I had reported 11 usable stops of DR in my previous C300 posts but for whatever reason - perhaps reduced flare from the chart as I’ve taped off half of the brightest pluges - I am now confident that there are at least 12 usable stops of DR with the C100 and C300.

EDIT#2: I’ve been using my new Atomos Samurai Blade for the past week instead of the Ninja2 - well worth the extra investment. This thing is razor sharp. No need for a 2nd monitor anymore to judge focus. You’ll need one of these though to get the C100 signal into the Samurai Blade.

EDIT#3: Aug. 29 - Added custom colour settings for the C100 and C300 here with a bunch of charts:


- 12 stops usable dynamic range, arguably 12.5 stops using an external recorder and ProresHQ (see chart photos below.) This is a ton of room to nail an exposure BTW even without a lighting package.

- sick low light performance, noticeably cleaner than Alexa or Epic at 1600ASA or higher.

- built in ND filters.

- $7000 all in for a C100, Atomos Ninja2, SSD media and SD cards. Gives you ProresHQ master media and AVCHD backup.

- spatial resolution resolves to the Nyquist limit for 1080 (I can’t tell the difference between an Alexa and the Canons at 1080 for sharpness.)

- great compact ergonomics, build quality and field proven reliability.

- work perfectly with all EF and EF-S lenses I’ve tried.

- smooth highlight roll off using Canon log. (Don’t use the Cinema Locked option as it introduces slight undesirable sharpening - thanks Andy Shipsides @ Abelcine for that tip!)

- small native recording file sizes for archival.

- ACES compatible.

- discreet size for documentary work and travel. (Just travelled with it in Mexico, it passed as a tourist camera in some not-so camera friendly areas.)

- flexible sound recording options included.

- C100 AVCHD cleans up pretty well using 5DtoRGB to transcode to ProresHQ, but obviously still not as clean as Ninja2 ProresHQ.

- very simple workflow options.

- did I already mention reliable?


- no higher res still capture than 1080. The camera JPEG 1080 frame grabs clip at 100IRE instead of scaling all DR into 100IRE so I just make stills in post. Note to Canon - the extra sales you would gain from enabling this (especially on the C100) would outweigh the loss in DSLR sales you are worried about. I would still use my 7D for pro stills for its size, sensor resolution, and raw speed vs a C100 or C300 with 8 or 9 megapixel stills enabled.

- no 60FPS frame rate except 720P, and only on the C300. Sales would be much higher if there was at least 60FPS at 1080. It’s the most common complaint in all the discussions I’ve read. 99% of my work is at 24P so I don’t NEED this 99% of the time.

- viewfinders are crap but usable with peaking and view assist on in bright light.

- the Ninja2 is no better than the camera LCD as an external monitor for judging focus. Will upgrade to a better external recorder eventually. I still use my Marshall HDMI monitor with both cameras when possible.

- built-in NDs should be denser, as I still need to add ND to get an F2.8 day/ext at 850ISO. Still so much better than nothing though!

- no built in mic. Even a DSLR quality internal mic would be very useful when the top handle is not desired.

- can’t reprogram the assignable buttons to take over the hand grip joystick’s role; it would be nice not to always have the hand grip attached.

- can’t jam the C100 with external timecode. You can program it to within a few frames though, so it’s not a deal breaker - just a pain in the ass for syncing in most apps. I get it - that’s partially why it’s half the price of a C300 - but anyway, I still miss it.

- no HD-SDI port on the C100 - didn’t expect it for the price.

- the C100 records AVCHD internally - not pro, not usable as a master broadcast format.

- 8bit output to all external ports. Really? I know it’s not noticeable most of the time but dammit sometimes it is! Note to Canon - some owners would like to shoot VFX and have smooth gradients that surely must be possible given your amazing sensor technology. Please try and enable this with a firmware upgrade for both models. We’ll pay for it if we have to!


I was aware of the shortcomings before buying and still did it. My business plan makes the C100 a no brainer for me. It will make a profit before it is obsolete and give me many hassle free days of shooting on lenses I can afford.

The image is fantastic and even feature worthy. Yes, I said it. Time for some nostalgic perspective and sarcasm. I was one of the first DITs to use the Panavision Genesis (Scary Movie 4, 2005), which recorded at 1080 and had less dynamic range than these cameras, yet somehow audiences stomached 1080 on the big screen. Oh right - that’s because the movie-going public doesn’t give a rat’s ass about acquisition formats, nor can it tell the difference between them most of the time. That 1080 movie and dozens like it have done just fine in theatres next to film, 2K, and 4K. This is because the total image workflow and display system MTF determines the image’s final perceived resolution, which is still rarely a true 4K projection in most urban cinemas. BTW, John Galt (Panavision) and Larry Thorpe (Canon) did a thorough joint presentation a few years back on system MTF which is brilliant and can be viewed here at FreshDV. I digress. As you know, if you have a bad script, awful acting, or poor lighting, you’re still going to have a shit movie. The public will notice that and so will your financier. Not the camera’s fault. Honestly, if you can’t light creatively with a full crew and gear package within a 12-stop exposure range then something is wrong. I wouldn’t hesitate to shoot a series, commercial, or low-to-medium budget feature using C300s if you can’t afford a Sony F5 package or better, provided your VFX dept. is cool with 8bit 4:2:2 ProresHQ (external recorder required) for their needs. Well worth testing for the potential savings. If they are, then skip the Alexa or Red package and spend the savings on glass, lighting, or a non-destructive workflow.

Rant continues… Unless the public has suddenly begun pixel peeping on their new 70” 4K flat screens, displaying lightly compressed 4K images streamed over affordable ultra-high-speed data plans into their SMPTE-standard-lit screening rooms, I think 1080 still has a few more years ahead of it yet. Of course there is a place for 4K and greater acquisition, but this topic has been debated to death and I’m not interested in fighting people that NEED 4K - or raw for that matter. Presently I don’t - if you do, then I am envious. I love killer resolution and dynamic range as much as you do, trust me - but these are business decisions for me, as I am not financing high budget features currently nor am I a wealthy amateur. I am though a cinematographer and DIT, so if a client asks for 4K I will happily point them to a Red option that fits their budget. BTW I love working with Epics, but I currently don’t have a business model to pay for a package myself. Plus I’m not interested in competing with the many excellent rental houses out there.

As for the amazing Magic Lantern DSLR raw video hacks as of late, congrats to the whole community on an amazing accomplishment. This will be huge for many emerging cinematographers, especially as future releases mature. It’s not something I can use on a paid job or have time to process properly on set, so I am not as enthusiastic as many others out there.

The C100 and C300 are great visual storytelling tools, nothing more, nothing less. You can simply get on with it and shoot compelling moving images without much fuss or a 2nd mortgage. BTW, I don’t have brand allegiance. I have a 2013 Blackmagic cinema camera preordered and have used every film and digital cinema camera system out there except Dalsa and Imax. After 20 years on set I just happen to know what tight budgets, packed schedules, ridiculous delivery deadlines, and 100-plus setups per day look and feel like. These are very demanding times for producers, and these cameras fulfill many production niches far better than you might first think as a DP or producer. Only you know what you need for your project, but before dismissing a camera because of its price tag or specs, do yourself a favour and shoot with one if you haven’t already.


Dynamic range chart (Gamma and Density Trans-1 chart) from the C100, recorded using an Atomos Ninja2 into ProresHQ. Camera settings were Canon log, 850ISO, 5600K, 17-55mm EF-S zoom. Frame grabs are from my Leader 5330 waveform monitor.



Note: the full-res chart was scaled to fit within 0-100IRE, no other adjustments made. TIFF version found here.

Hope Dr. David Suzuki doesn’t mind me using his image for this. It was a huge honour and privilege to meet him. I recently used my C100 / Ninja2 for his interview on a Canon 70-200mm IS F2.8. Canon log, 5600K, think it was 1000ISO.



Above pic#2 - quick luma curve and sat boost, no sharpening or other adjustments made.

To Mac Pro, or Not to Mac Pro, That Is the Question:


Sir Laurence Olivier contemplating the new Mac Pro

EDIT FEB. 28, 2014 - Updated to OSX 10.9.2 a couple days ago, no issues to report. See the link below for the update guide and also to get CUDA and OpenCL working properly if you have a Nvidia GK110B GPU. To get my GTX780 up and running took a bit of work.   

EDIT FEB. 21, 2014 - Updated to Mavericks 10.9.1 following MacMan’s guide found here:

Just do exactly as he says as usual and all will be well. Everything works as it did in 10.8.5. 

For anyone using Assimilate Scratch or Scratch Lab, support has given the go ahead for Mavericks. I have been running the v8 beta for a few days and am loving it but have yet to give it a full workout.

Now for the exciting part, and I quote from support - “Note that LAB V8 now uses multi-GPUs to debayer RED material.”

That is for MX media, not Dragon yet, but that is in the works as well. Will have to test my GPUs vs the Red Rocket one of these days. Also from Assimilate support regarding some v8 questions - “Scratch is 1 GPU only. When processing wrappers (MXF / QT), GPU does the color but encapsulation remains a CPU task.” Lots of great news with the v8 release overall.

EDIT JAN. 28, 2014 - Updated with R3D playback performance below


A lot of people in our industry are wrestling with inevitable hardware purchase options given that the 2010 Mac Pro design is EOL. What makes this year’s decisions particularly difficult is that if one wants to invest in Apple to manage demanding workstation tasks and use their existing PCIe hardware, the 2013 Mac Pro requires an external Thunderbolt expansion infrastructure if it is to directly replace last year’s model.

Most concerning for me is that despite the incredibly innovative new design, Thunderbolt expansion solutions for multiple PCIe cards unfortunately mandate potentially significant PCIe lane sharing, which was formerly a manageable issue for power users on the previous Mac Pro platform. Apple is apparently not concerned with this impending headache for many of us though:


"It’s our most expandable Mac yet." Really? How so? With the 2009 / 2010 design, the 2nd PCIe x16 slot allows an external PCIe expansion chassis to be connected to the Mac Pro with a bidirectional data pipeline of 8000MB/s including overhead. (I’m assuming you occupied the first x16 slot with your OSX GPU.) Not to be forgotten are the two PCIe 2.0 x4 slots offering 2000MB/s of bandwidth each. That’s 24 PCIe lanes total available for expansion.

With the 2013 model, this is what you get for PCIe card expansion: 


12 PCIe lanes total. Each of the 3 Mac Pro Thunderbolt 2 Falcon Ridge controllers connects to the PCH / CPU via a Gen2 PCIe x4 bus, period. In theory, the fastest you can expect out of each TB2 controller - or any single TB2 port - is approximately 2000MB/s, given an approximate 20% overhead penalty (20Gbps = 2500MB/s). Old Mac Pro fastest expansion option: 8GB/s. New Mac Pro fastest expansion option: 2GB/s. Bummer.
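The back-of-napkin math above can be sketched in a few lines. Assumptions here: PCIe 2.0 moves roughly 500MB/s per lane per direction after encoding overhead, and the 20% Thunderbolt protocol penalty is the figure from the text, not a measured number:

```python
# Rough expansion-bandwidth comparison: 2010 Mac Pro slots vs. 2013 Mac Pro TB2.
# PCIe 2.0 moves roughly 500 MB/s per lane per direction after 8b/10b encoding.
PCIE2_MB_PER_LANE = 500

# 2010 Mac Pro: one free x16 slot plus two x4 slots = 24 lanes for expansion.
old_lanes = 16 + 4 + 4
old_best_pipe = 16 * PCIE2_MB_PER_LANE       # fastest single pipe: the x16 slot

# 2013 Mac Pro: three TB2 controllers, each on a Gen2 x4 bus = 12 lanes.
new_lanes = 3 * 4
tb2_raw = 20e9 / 8 / 1e6                     # 20 Gbps link -> 2500 MB/s
tb2_usable = tb2_raw * 0.8                   # ~20% protocol overhead (assumed)

print(old_lanes, new_lanes)                  # 24 lanes vs 12 lanes
print(old_best_pipe, tb2_usable)             # 8000 MB/s vs ~2000 MB/s
```

Same conclusion as above: the fastest single pipe out of the old tower is roughly 4x what any one TB2 controller can deliver.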


Or is it? To be fair, 2GB/s per controller is fast enough for my needs most of the time. My 24TB RAID10 (8 x Hitachi Ultrastar 3TB SAS drives - 12TB usable) and Atto R680 would easily run at full speed on one TB2 controller, provided nothing else is connected. I say that because, unless TB2 has reduced latency over TB1, I have found that IOPS / read / write speeds take a nosedive when a TB controller has to manage more than 1 working device at once. Multiplexing multiple devices’ data into one PCIe x4 bus takes time and resources that do affect individual device performance. With 2 controllers remaining, the 2nd Falcon Ridge controller would be saturated by my Red Rocket, as x8 Gen1 = x4 Gen2 bandwidth. That leaves 1 TB2 controller to take care of my Atto H608 HBA card for LTO-5 and eSATA shuttle drives, Sonnet Qio E3 for SxS cards, Myricom 10GbE fibre card (for Phantom docks), Decklink 3D Extreme SDI card for Resolve, Aja Kona 3G for Scratch Lab, FW800 PCIe card, and Highpoint Rocketraid 2314 for Redmag readers.

Obviously it’s unlikely all of those are going to run at full speed sharing the same TB2 controller, but in reality I probably wouldn’t configure the cards that way anyway. I’d test thoroughly to find the best way to share the available bandwidth between the 3 controllers. Until then I can’t say what the actual speed penalty would be using Thunderbolt 2 expansion vs. x16 PCIe expansion, but what I do know from experience is that when you’ve got a half million 5K frames in the render queue and multiple delivery formats to churn out ASAP, every little bit of performance helps.

Then there’s durability. PCIe cards are unquestionably fragile, but mounted properly inside a PCIe slot there’s really no reason for them to get physically damaged except maybe from electrical issues. All I know is that it’s less mess and more robust for me to take a Mac Pro or hack to set than a Thunderbolt Mac, multiple expansion chassis, and additional Thunderbolt peripherals. By the time you get all the AC cables plugged in and all the Thunderbolt daisy-chain cables connected, you’ve got many potentially disastrous connections in that scenario. With a tower, all of the PCIe peripherals are bus or PSU powered and there is only 1 AC cable to worry about if you’re not running a PCIe expansion chassis. If you are, you then have 2 more cables to worry about. No big deal. Plus the footprint is actually smaller with a tower and internal peripherals than with an equivalently equipped Thunderbolt setup.

There is one last reason I’m not a huge TB fan. I have experienced rare but painful TB peripheral crashes, especially when daisy chaining devices. A restart is usually required. I don’t have this problem on Mac Pro or hackintosh setups with the same cards. Not much more to say about that. I get it though - not many of us need this much expansion bandwidth, and Apple will undoubtedly expand its Mac Pro market by making it this small and cool looking. Too bad there isn’t yet another version, made by Apple, that caters to their top-end power users.


Assuming that your expansion needs will be met with Thunderbolt 2 and you’re ready to pull out your wallet, you might want to take a look at the new Mac Pro graphics vs. other options one last time before committing. 

Truly the 2013 MP is well equipped in the graphics department with dual AMD D700s available, but it’s still frustrating being unable to take advantage of additional PCIe GPUs for applications like Davinci Resolve and Adobe Creative Cloud. CUDA acceleration is not supported on the available Mac Pro graphics options, but that will become less of a concern as the months roll by and OpenCL becomes better utilized in pro media apps.

Still, as good as the dual FirePro GPUs are, they aren’t quite as impressive as dual Titans or GTX780s, at least on paper. For example, the EVGA ACX GTX780 Superclocked Edition tops out at 4.3 teraflops vs. the Mac Pro D700 at 3.5 teraflops in single-precision performance. Here are a few more specs:

NOTE: I am not 100% sure of their compatibility with Mavericks as I am still running 10.8.5 on all of my Macs.

EVGA GTX780 ACX Superclocked Edition:


NOTE: GTX 780 ACX SC OpenCL not working on 10.8.5; OpenGL / CUDA is working properly.

GTX 780 ACX SC CUDA-Z score:

(Sorry about the lame pics - I did this in a hurry…)






NOTE: I have not personally tested the Titan but rumour is some people have both OpenCL and OpenGL working properly in Mavericks.

ASUS GeForce GTX 770 DirectCU II OC:


Graphics Engine: NVIDIA GeForce GTX 770
Bus Standard: PCI Express 3.0
Video Memory: GDDR5 2GB
CUDA Cores / Shading Units: 1536
Memory bandwidth: 224GB/s
TMUs: 128
ROPs: 32
SMX Count: 8
Pixel Rate: 33.9 GPixel/s
Texture Rate: 135 GTexel/s
Floating-point performance: 3.25 TFLOPS

Two of these should perform similarly to the dual D500 Mac Pro option considering the nearly identical specs.

NOTE: OpenCL and OpenGL / CUDA working 100% on this GTX 770 card running 10.8.5 and the OSX default Nvidia drivers.

GTX 770 CUDA-Z score:




GTX 770 Luxmark OpenCL score:


Heaven Benchmark 4.0 - Extreme settings but Tessellation not working in 10.8.5 to my knowledge:


Added Jan. 28, 2014:

R3D playback clip: Epic 5120 x 2700 Redcode 8:1 23.976

OSX 10.8.5
CPU - i7 3970x Overclocked to 4.4GHz
RAM - G-Skill 64GB 1866MHz
GPU - 1 x Asus GTX770 2GB
Storage - 12TB RAID10 - ATTO R680 & Hitachi Ultrastar 3TB SAS x8

Redcine-X Pro version 22.1.31200
1/2 debayer - 23.976 FPS - no dropped frames
Full debayer - drops frames - stalls briefly every 65 frames

Davinci Resolve 10.0.2 (switching between the OpenCL / OpenGL GPU setting didn’t change results)
1/4 good - 23.976 FPS
1/2 good - 21 FPS
1/2 premium - 10 FPS
Full premium - 5 FPS

2013 Mac Pro GPU options:



Click on this link for a great Nvidia GPU comparison chart found at

I am damn impressed though by the D700 specs considering the tiny size of the new tower design. In real-world use there will probably be little meaningful difference between dual Titans and dual D700s according to these specs, but of course this is all guesswork for the time being.

Moving Awkwardly Forward:

It was with all this in mind that I reluctantly committed to another hackintosh build. Before I get into that though, I must give a huge thanks to Tonymacx86 and the amazing contributors on his website, and of course Apple. I mention Apple because so far they have been turning a blind eye to the hackintosh community. Without them both, hackintoshes would not be an option.

This actually wasn’t a difficult decision once I started looking into it again, despite having sworn that I was done building hacks. I’ve built 6 hackintoshes since Leopard and they are unquestionably a pain in the ass to perform updates on. Much less so than in the past, but still not totally stress free. Once dialed in though, stability has not generally been an issue for me. I rarely feel like an OSX crash is caused by my core hack components - namely CPU, RAM, motherboard, and platform-specific kexts. Instead, crashes are almost always caused by application bugs or peripheral hardware conflicts that likely would have also caused a problem on my real Macs. I say this confidently because, as I’ve mentioned, I regularly share hardware between my hacks, Mac Pro, and TB Macbook Pros. They all have issues at some point. I consider my hack builds stable when I get all graphics benchmarks performing at expected or better speeds and when the system can successfully run the Prime95 Torture Test (Blend mode) for 24 hours while staying under 80 degrees Celsius on all CPU cores.

In the end my gear decisions have to be based on overall value plus meet a few basic needs. The 1st need is getting the job done right as quickly as possible within my available resources. Translated, this means: faster transcodes = more sleep. Simple. 2nd to that is ensuring my business plan remains intact, which is definitely getting harder to do each year. That could easily be a post on its own though. When I look at all available options as objectively as possible, I have to admit that if it became impossible to build hackintoshes tomorrow, it would make more financial sense for me to migrate to Windows than to invest in the new Mac Pro system. Prores encoding is the main technical obstacle I would face in that scenario, and there are now economical solutions for that such as Cinec. For me, at least for now, these needs are met best using PC parts running OSX.

One Last Hurdle:

If you are a more morally upstanding person than myself or part of a large organization, there is at least one additional consideration you will face if considering a hackintosh. That is the legal and moral implication of installing OSX on PC hardware. No matter how you slice it, it is a violation of Apple’s OSX EULA to install OSX on non-Apple hardware. This is a huge debate and has been discussed many times in many languages, but if you’re a large company, the Apple OSX EULA violation makes hackintoshes a no-go. No project manager or legal department would likely allow it. For small to medium sized businesses, particularly in the VFX / edit house world, hackintoshes are not uncommon but generally kept out of the client’s line of sight. Although extremely unlikely, hopefully Apple will someday open up the OSX EULA to accommodate hackintosh installations from a legal standpoint.

The Build: 


Above: OSX reads 4.3GHz but it is actually running at 4.4GHz. Haven’t looked into a fix yet.

Here is a list of my hackintosh parts that should closely match the performance of the new 3GHz 8 core E5-1680 Mac Pro model with 64GB RAM:

Intel i7 3970x CPU - $700 (eBay) Overclocked to 4.4GHz 
64GB G-Skill 1866 RAM (2 x F3-1866C9Q-32GXM) - $660
Gigabyte X79S-UP5 Wifi - $355
Asus GTX770 OC GTX770-DC2OC-2GD5 - $370
(my EVGA GTX780 ACX SC is collecting dust until the 10.8.5 GK110B OpenCL issue is resolved)
Silverstone 1200W PSU - $233
Intel RTS2011LC CPU cooler - $55
Coolermaster Elite 330 ATX case - $55 (already owned)
2 x Cooler Master Blade Master 120mm fan - $30 (already owned)
OSX drive - 1TB Hitachi Deskstar 7200RPM - $100 (already owned)
OSX Mountain Lion $20
Hackintosh install guides - - by donation
Total - $2578

As an aside I am not installing OSX Mavericks until I get the go ahead from Assimilate. I need Scratch Lab to be as stable as possible which means sticking with 10.8.5 for now.

The performance estimate I’m using as a reference for the 2013 Mac Pros can be found here on the Geekbench website.

My Geekbench 3 64bit score @ 4.4GHz 24/7 stable:



2013 Mac Pro Geekbench 64bit estimates:


There is a new Geekbench 64bit score for the new 12 core Mac Pro: 33066

For now I’m satisfied getting close to the 8 core model performance-wise relative to the cost. My 3970x build is about 30%-50% faster than my workhorse i7 970 / X58A-UD5 / 24GB RAM OSX system with a GTX660 for what it’s worth.


Lastly, here are some estimated performance / price comparisons for you to digest:

2013 8 core Mac Pro - $7700 - 5% faster than my build - $5122 more than my build

2013 12 core Mac Pro - $9700 - 30% faster than my build - $7122 more than my build

At least you get a significant speed bump for an extra $7000…

Anyway, I don’t know how many of you are in the same boat as I am but I hope this helps you find the right solution for your workstation needs whatever they may be. 

BMPCC vs. Man with Charts


EDIT: Feb. 6, 2014 - Added 2 more CinemaDNG raw exposures / frames from the BMPCC below. The new pics are way overexposed and are for anyone wanting to see clipped RAW sensor data on this DR chart. The Davinci Resolve screen grabs should answer any setting related questions. 

EDIT: Nov. 27, 2013 - 180 degree shutter problem fixed! Install the 1.5.1 firmware update from Blackmagic Design and all will be well with your BMPCC. Now if only we could get Prores LT on the next update…

Here are a couple of test charts shot with my Blackmagic Pocket Cinema Camera. But before you get your hopes up, uber-nerds, this won’t be a full-blown, every-last-ergonomic-detail-raked-over-with-a-fine-tooth-comb kind of review. There are plenty of other places to go for that. I can, however, show you how the camera performs in terms of dynamic range and spatial resolution and then bitch about it all for a while.

Onto the tests. I installed v1.5 of the camera software for this review and I can tell you that I have seen zero instances of the highlight orb phenomenon. Hope that’s been sorted out for everyone now. Unfortunately though, the 180 degree shutter problem has not been fixed with this latest software update. For those of you unaware of this problem: when the shutter is set to 180 degrees at 23.98 (the only frame rate I have tested), the shutter is effectively off (360 degrees), so your image will have twice the motion blur it should have. I have been shooting at 172.8 degrees (the closest option available to 180 degrees) to get around this for now, as it actually looks like a correct 172.8 degree amount of motion blur. Hope this gets addressed soon.
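For anyone fuzzy on the shutter angle math: exposure time is (angle / 360) / frame rate, which is why a shutter stuck at 360 degrees gives exactly twice the motion blur of 180 degrees. A quick sketch:

```python
# Exposure time from shutter angle: t = (angle / 360) / fps.
def exposure_time(angle_deg: float, fps: float) -> float:
    return (angle_deg / 360.0) / fps

fps = 23.98
t_180 = exposure_time(180.0, fps)    # what you asked for: ~1/48s
t_360 = exposure_time(360.0, fps)    # what the bug delivers: ~1/24s, twice the blur
t_1728 = exposure_time(172.8, fps)   # the workaround setting: ~1/50s

print(round(1 / t_180), round(1 / t_360), round(1 / t_1728))
```

The 172.8 degree workaround lands at roughly 1/50s, close enough to 1/48s that the motion blur looks normal.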


The Panasonic 12-35mm F2.8 zoom I picked up is a great companion for this camera. It’s small, lightweight, sharp, has optical image stabilization, and covers a nice range. No complaints about this lens considering its intended purpose. It’s the lens I used for this review and all my BMPCC shooting so far. Still waiting for Metabones to release their EF to MFT Speed Booster, which should do amazing things for this camera.

Having shot with this camera daily for the last 10 days has convinced me that it will remain part of my toolkit. I wasn’t sure if I was going to keep it, but its compact size and great image quality make it more than adequate for extra angles during stunts or other inconspicuous situations where bigger cameras just won’t work. For documentaries the BMPCC is even less in your face than a stripped-down C100, and I’m sure it will be widely used in that arena in the near future.

At its maximum sensitivity of 1600ASA the noise is acceptable IMO, and I can live with the greater depth of field from the S16-sized sensor. 800ASA has a quiet and organic-looking noise / grain signature.

With a 0.5x multiplication factor for BMPCC lens focal lengths to get an equivalent field of view to S35 sensors, the depth of field is very close to how an S35 sensor looks 2 stops more closed down than any given BMPCC lens iris setting. For example, a 40mm S35 lens set to F5.6 looks very similar to a 20mm BMPCC lens set to F2.8. Be sure to check out the fantastic FOV calculator on Abelcine’s site when you get a chance.
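That rule of thumb is easy to sanity-check in code. This little sketch (the function name is mine, purely illustrative) treats the BMPCC-to-S35 relationship as a straight 2x crop factor - the inverse of the 0.5x figure above - which is an approximation, not an official spec:

```python
# Field-of-view / depth-of-field equivalence under an assumed 2x crop factor
# (the inverse of the 0.5x multiplier above). Equivalent focal length scales by
# the crop factor, and so does the f-number for matching depth of field.
import math

CROP = 2.0  # BMPCC (S16-ish) -> S35, approximate

def s35_equivalent(focal_mm: float, f_number: float) -> tuple:
    """Return the (focal length, f-number) that looks similar on an S35 sensor."""
    return focal_mm * CROP, f_number * CROP

focal, stop = s35_equivalent(20.0, 2.8)
print(focal, stop)                  # 40mm at F5.6, matching the example above

# Doubling the f-number is a 2-stop change, since each stop is a factor of sqrt(2):
stops = math.log(CROP, math.sqrt(2))
print(round(stops, 6))              # ~2.0
```

So a 20mm at F2.8 on the BMPCC frames and renders depth of field roughly like a 40mm at F5.6 on S35, exactly the pairing in the example above.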

As for dynamic range, saying the BMPCC is a 13 stop camera is a bit of a stretch. I’m seeing 12 stops at 800ASA, and 12.5-13 stops at 1600ASA depending on what you consider usable. I’d say it has a bit less usable DR than the Canon C300 / C100 due to a noisier image, particularly at 1600ASA, which makes shadow detail less recoverable on the BMPCC. The Canons measure 12-12.5 stops at most ISO settings over 800ISO using an external ProresHQ recorder. (Canon uses ISO, Blackmagic uses ASA, Arri uses EI, whatever…) For comparison, the Arri Alexa has over 14 stops usable according to my chart. Amazing performance on the BMPCC for a $1k camera body though; that’s still a ton of dynamic range.
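For context on what a stop more or less actually means: stops of dynamic range are just the base-2 log of the scene contrast ratio, so the gap between a 12 and 13 stop camera is a full doubling of the brightest-to-darkest range it can hold. A quick sketch:

```python
import math

def stops_to_contrast_ratio(stops):
    """Each stop doubles the ratio between the brightest and
    darkest detail a camera can record simultaneously."""
    return 2 ** stops

def contrast_ratio_to_stops(ratio):
    return math.log2(ratio)

print(stops_to_contrast_ratio(13))    # -> 8192 (a 13 stop camera spans ~8192:1)
print(contrast_ratio_to_stops(8192))  # -> 13.0
```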

In RAW mode, the sensor’s DR appears to be exactly the same as is visible in Prores / film mode, but the payoff, obviously, is that you can push the image further than in Prores mode before bad things happen. I used Davinci Resolve 10 to export the stills from the original camera CinemaDNG files, in case you’re wondering. I won’t be shooting RAW anytime soon, as it’s not worth the minimal benefit I am seeing.

Same story in the spatial resolution camp. For this test I used my 4K Mega Trumpet chart. The BMPCC cannot resolve as much detail as the Arri or Canons, but it’s still very respectable, with minimal, if any, increased sharpness in RAW mode compared to Prores mode. The lens was set close to 30mm at F5.6 and appears to out-resolve the sensor nicely. Judging strictly on sharpness in either mode, it would be hard to tell the Alexa, Canons, and BMPCC apart despite the generous amount of chroma moire visible on the BMPCC 4K res chart image below. It’s the worst of the 3 cameras in this regard, but I have no doubt it can be intercut with them as long as the DOF is kept to an absolute minimum.

ASA changes on the BMPCC affect the recorded data in the exact same way as found with the Alexa in log-C / Prores mode. At 1600ASA the entire dynamic range is recorded across the greatest range of available bits and at 200ASA it is the opposite. I’ll use 400ASA sparingly and will leave 200ASA alone unless I’m in some sort of low ASA emergency.

So here are the charts. They are all 100% ungraded:


Above: BMPCC - 800ASA, Film DR, ProresHQ, 5600K


Above: BMPCC - 800ASA, RAW CinemaDNG

Below are Leader 5330 waveform images of the above chart at various settings:


Above: BMPCC - 200ASA, Film DR, ProresHQ, 5600K


Above: BMPCC - 400ASA, Film DR, ProresHQ, 5600K


Above: BMPCC - 800ASA, Film DR, ProresHQ, 5600K


Above: BMPCC - 1600ASA, Film DR, ProresHQ, 5600K


Above: BMPCC - 800ASA, RAW CinemaDNG


Above: Canon C100 ProresHQ HDMI capture - 850ISO, Canon Log gamma, 5600K


Above: Alexa - 800EI, Log-C, SUP-6, 5600K

EDIT: Feb. 6 2014 - New CinemaDNG pics from overexposed chart:

Overexposed #1:

Overexposed #2:

Resolution charts below:


Above: BMPCC - 800ASA, Film DR, ProresHQ, 5600K


Above: BMPCC - 800ASA, RAW CinemaDNG


Above: Canon C100 ProresHQ HDMI capture - 850ISO, Canon Log gamma, 5600K


Above: Alexa - 800EI, Log-C, SUP-6, 5600K

Feel free to download these stills and drop them on an NLE sequence to see just how close they all are to one another. For $1000 you can’t go wrong having this camera in your bag or, uh, pocket. Had to say it.

For more on the Alexa and Canon Cinema EOS dynamic range and resolution see these posts:

Alexa vs. Cinema EOS - 4K Mega Trumpet resolution test

On "Arrow" we often shoot with our Alexas and Canons simultaneously. The most noticeable challenge with this camera mix is their inherent colour differences which I covered in this exciting post.

So if you’re looking for yet another great party conversation starter, you’re in luck. I shot a test comparing the spatial resolution of an Alexa, a PL mount Canon C300, and an EF mount Canon C100. These cameras appear indistinguishable live in terms of spatial resolution, but I was curious to know how they technically stack up using a lifeless chart.

Thus, I used my 4K Mega Trumpet resolution chart from DSC Labs as the target image. The Alexa was set to Log-C, Prores4444 at 1080, 23.98, 800EI. The HDMI port on the C100 (23.98, 850ISO, Canon Log gamma) was connected first to an Atomos H2S with 3:2 pulldown engaged and the subsequent 23.98 signal was then recorded by a Samurai Blade into ProresHQ. For the C300 (23.98, 850ISO, Canon Log gamma) I used the internal MPEG-2 compression at 50Mbps and for fun later converted the .MXF file into ProresHQ using Davinci Resolve to see if any significant high frequency detail gets lost during transcoding. 
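For anyone unfamiliar with what the H2S’s pulldown processing undoes, here’s a rough sketch, assuming the standard 2:3 cadence, of how four 23.98p frames get spread across ten 59.94i fields; pulldown removal reverses this mapping to recover the original progressive frames:

```python
def apply_23_pulldown(frames):
    """Spread progressive frames across interlaced fields in a 2:3 cadence:
    even-indexed frames are held for 2 fields, odd-indexed for 3."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (3 if i % 2 else 2))
    return fields

print(apply_23_pulldown(["A", "B", "C", "D"]))
# -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```

Four frames become ten fields, which is exactly the 23.98-to-59.94 rate ratio (24 × 2.5 = 60).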

I didn’t discover anything too unexpected, but it is pretty amazing to see a $7.5k camera (C100 with necessary extras) and $1k lens stand up this well, spatially speaking, to an $80k camera and $25k lens. Dynamic range is another story, which I detailed here. As you can see below, the Canons exhibit more moire and aliasing at certain angles, mostly because of a less aggressive OLPF than the Alexa. That said, this issue usually only rears its head on set when shooting monitors at certain distances, and indeed the Alexa does a nicer job than the Canons in that case. Softening the Canons with a diffusion filter makes this less apparent and brings them even closer to matching the Alexa overall. The jury is still out, but I’m loving the Schneider 1/4 and 1/2 Hollywood Black Magic diffusion, particularly since my filter tests on the Canons. On that note, I will be doing a separate post on the dynamic range effects of various diffusion filters on the Canons in a few weeks.

Despite having the sharpening set to -10 in the matrix / colour profiles on the Canons, there unfortunately appears to be some minor edge enhancement happening. Combined with the effect of the OLPF, this actually gives the Canons a sharper looking image overall compared to the Alexa, though it is not real detail past the 1000 LPH mark at best. All this test has reiterated to me is that a TV audience at home would never be able to pick these cameras apart in a million years in terms of sharpness (assuming you have decent glass up front and a proper non-destructive workflow).

Below are frame grabs from the test. They are self explanatory, I think. I shot under fluorescents, so the colour is not as neutral as I would have liked in the ungraded shots. I did a minor grading pass to neutralize colour and increase the contrast for easier comparison. Both versions are available for you to download at the end of this post if you wish.






Above: Ungraded frame grabs

Below: Graded frame grabs






Note the above stills are all JPEGs created from these TIFFs:

I would have loved to do this test using the Alexa, C500, F5, F55 and Dragon Epic in their respective full res / raw modes but this is what I have available to me at this moment. My Blackmagic Pocket Cinema camera still hasn’t arrived so I can’t show you that yet either. As I get new bodies I’ll chart the hell out of them for those of you that are into that sort of thing. 

Arri Alexa and Canon Cinema EOS dynamic range revisited

We all know about the dynamic range diagrams from Arri and Canon for the Alexa and C500 / C300 / C100:


Above: Arri Alexa 


Above: Canon Log - C500 / C300 / C100

The thing is, the loss of DR at various exposure indexes I’ve been seeing live hasn’t really felt as bad as the charts make it out to be, at least with the Alexa. So, I shot a test in an attempt to get an alternative perspective on what happens at various EI settings.

To reduce flare as much as possible for clear dynamic range readings, I flagged off as much of my backlit Gamma and Density 16 stop trans-1 DR chart as possible. All Alexa measurements were taken from the SUP6 Log-C live rec out set to legal and captured into ProresHQ with my Aja Kona 3G. The C100 measurements were taken from the live HDMI out connected to an Atomos Connect H2S converter (3:2 pulldown removal enabled), which was then captured into 23.98p ProresHQ using my Kona 3G. For the C100 I used a Canon 100mm Macro 2.8 L lens and set the gamma to canon log, except for the Wide DR clip below. The Alexa used an Angenieux 45-120mm T2.8 zoom. Both cameras were set to 5600K with a 180 degree shutter. On “Arrow” I don’t have access to raw-enabling hardware for the Alexa or C500, so this test only applies to the log output options for these cameras.

Here’s what happened, first the Alexa:


Above: Alexa 400EI


Above: Alexa 800EI


Above: Alexa 1600EI

Not surprisingly, I’m seeing the entire dynamic range at all EI settings. What is worth noting, however, is that the entire DR is represented by fewer bits at 640EI and lower compared to 800EI or higher. This will of course limit how much you can adjust the image in post before banding rears its head. You’ll have to decide for yourself if this is a big deal or not. For me it’s totally acceptable to shoot at 640, 500 or 400EI, as it’s only a minor primary adjustment in post (or live with a LUT processor) to stretch the image out to the same healthy data levels as 800EI. I’ll stick to NDs for 320EI or lower though, as it starts to look a bit too compressed for my comfort level. As for 1000EI or higher, I don’t have any concerns other than image noise as you increase the EI value. As you can see in the 1600EI still above, the DR is spread out across a very large range of available bits. We routinely use up to 2560EI on “Arrow”, and not only for the added stop on the lens. Creatively, the 2560EI grain has its own signature and complements certain sequences very well.
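That “minor primary adjustment to stretch the image out” is essentially a linear remap of code values. A minimal sketch; the level numbers below are invented for illustration (loosely based on the 10-bit legal range of 64-940), not measured from the Alexa:

```python
def stretch_levels(pixels, src_black, src_white, dst_black, dst_white):
    """Linearly remap code values in [src_black, src_white]
    onto [dst_black, dst_white]."""
    scale = (dst_white - dst_black) / (src_white - src_black)
    return [(p - src_black) * scale + dst_black for p in pixels]

# Pretend a low-EI clip only spans 100-700; stretch it across 64-940.
print(stretch_levels([100, 400, 700], 100, 700, 64, 940))
# -> approximately [64.0, 502.0, 940.0]
```

The catch the paragraph points out: stretching spreads the same number of recorded levels across a wider range, so heavy stretches can expose banding.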

As for the Canons:


Above: C100 400ISO


Above: C100 640ISO


Above: C100 800ISO


Above: C100 850ISO


Above: C100 1600ISO


Above: C100 850ISO WideDR gamma setting

Thought I’d throw in a WideDR gamma pic. There’s really no reason to avoid this setting from a DR perspective, other than more limited adjustment room during grading compared to canon log. If you are in a big hurry on the post side of things, this is a great setting to start out with.

Because canon log can only be recorded and output with 8-bit precision on the C100 and C300 (10-bit on the C500), the smaller distribution of image data at sub-800ISO values becomes a much bigger concern. I’m not going below 850ISO on the Canons; not much more to say on that. At 850ISO and higher though, I’m in heaven. I’ll shoot the Canons up to 16000ISO without hesitation. One of my favorite setups is our PL mount C300 with a 35mm Cooke S5 set to T1.4 at 16000ISO. It’s unbelievable at night.
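To put rough numbers on why 8-bit output makes sub-800ISO settings riskier than on the 10-bit C500: if a low ISO only fills part of the code range, the count of distinct levels left to grade with shrinks fast. The 70% fill figure below is an invented illustration, not a measurement:

```python
def usable_levels(bit_depth, fill_fraction):
    """Distinct code values actually occupied when the signal
    only fills fill_fraction of the full range."""
    return int((2 ** bit_depth) * fill_fraction)

print(usable_levels(8, 0.7))   # -> 179 levels at 8-bit
print(usable_levels(10, 0.7))  # -> 716 levels at 10-bit
```

The 10-bit file has four times the levels to spare, so the same partial fill leaves far more room before banding shows up in the grade.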

None of what I’ve covered here is groundbreaking, as there are many DPs shooting Alexas and Canons this way already. For myself though, previously having only one technical reference from Arri and Canon as a starting point for choosing which ASA to shoot at had always made me a little uneasy. I should have done this a long time ago, but better late than never I guess. At least now I know where I stand from a pre-compression data perspective and yes, I realize that sounds completely ridiculous.

Canon Picture Profile Workflow Update

EDIT: Updated with chart pics below - reposting this a 2nd time, hope it doesn’t disappear again…

It’s been a long time since my last post, but I just wanted to share that I am no longer using my RobsonDRMax Canon picture profile in my 7D. As I mentioned in my original comparison between my profile and Technicolor’s Cinestyle, they capture the same amount of dynamic range, so there’s no gain there. In the end, it’s just faster for me to work with Cinestyle than my own profile, mainly because my favorite piece of time-saving software, 5DtoRGB Batch, has added Cinestyle LUT support as part of its transcoding options. If I’m in a rush I convert to Prores using the Cinestyle option, and if not I transcode as-is to Prores and grade the clips more carefully.

The other change I’ve made is that I’ve started using Highlight Tone Priority for pretty much everything. I don’t know which firmware update fixed the vertical banding problem in video mode, but it’s no longer visible at all. I also don’t see the gain-like effects HTP has caused other photographers in video mode. It effectively adds 1 stop of badly needed dynamic range to my shots. It may be because I am using the brilliant VAF-7D filter for all video I shoot with the 7D, but I haven’t checked with it removed to see if it’s related or not. Also, I am only using 5DtoRGB-transcoded Prores files (which upsample chroma), not the camera masters. I’ll post new chart pics soon (they’re on my mac pro which I don’t have with me currently), but until then all I can say is there are 12 visible stops with HTP on using Cinestyle and the latest 7D firmware. 11 stops are usable, maybe all 12 for landscape grading at 320ISO. I get nearly the same DR reading shooting my Gamma and Density backlit DR chart using canon log on our C300. Not surprisingly though, even with sharpening applied to the 7D clips in Premiere or Resolve it still can’t match the C300 for sharpness, colour rendering and low light sensitivity, but it is closer than you might think at 320 or 640ISO.

On Arrow we use 3 Alexas, a C300, 7Ds, and occasionally black edition GoPros during the multiple stunt sequences each episode. They cut together well (with careful grading) which is my point. Except on rare occasions I use the workflow outlined above for the 7D shots before I send them in to post.

Please test for HTP banding yourself before shooting anything important though; you may not get the same results. I only shoot at 320, 640, or 1250 ISO, BTW. So for now my 7D is still alive and kicking, at least until the 7D MarkII is released later this year.

EDIT: Pics of chart added:



1. Above: 7D 320ISO 5600K HTP ON, Cinestyle profile, transcoded to Prores using 5DtoRGB with no post processing



2. Above: 7D 320ISO 5600K HTP OFF, Cinestyle profile, transcoded to Prores using 5DtoRGB with no post processing



3. Above: 7D 320ISO 5600K HTP ON, Cinestyle profile, transcoded to Prores using 5DtoRGB with Cinestyle LUT selected during transcode



4. Above: C300 850ISO 5600K Canon Log, camera master .MXF

Links for full res stills:

1. Chart Waveform

2. Chart Waveform

3. Chart Waveform

4. Chart Waveform
