Beginning Coding Tips

Since I’ve been involved in coaching at BCIT, I’ve started to get a better insight into the learning processes and tools required to get a grasp of ‘programming stuff’.

Programming isn’t easy by any stretch, so I may not be able to nail down exactly what that single ‘aha’ moment of clarity is that makes programming easy from then on, or whether many of these moments are required along the road. Or whether there even is a road, or a meandering goat path, or a highway.

I’ll begin with the basics you have to understand. I won’t rehash a lot of what should be covered, just various observations and clarifications.



Variables

OK, so maybe not that interesting, but the whole concept of variables is fundamental: at the very simplistic level, a variable is a box in memory that holds something.

Understand naming conventions – does your language care about upper and lower case in variable names? Javascript does, and case sensitivity is probably the greatest source of frustration for coders at all levels. Just be aware that if something isn’t working, this could be the simple reason.
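For instance (a made-up snippet), ‘total’ and ‘Total’ are two entirely different variables as far as Javascript is concerned:

```javascript
var total = 10;
var Total = 20;   // a completely different variable!

total + Total;    // 30 — both variables exist independently
```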

Variable Types

Variables have a type associated with them — this is a bit of a peek behind the curtain into how they might be stored internally. Usually a variable will either be a number, a string, or an object.

A number is basically something you can conceptually punch into a calculator to do some math with and have it make sense. Historically, computers have liked to store numbers in specialized formats, so in other languages you may find different types representing integers (whole numbers only) and floating point (all numbers, including fractions), and you may have to choose the right type for what you’re doing.

A string is literally just text, aka a string of characters you can type on your keyboard (and many more you can’t easily type) – alphabet, numbers, emoji, characters from the world’s languages, etc. Strings are quoted with single or double quotes.

An object variable is, well, an object (see objects later).

Automatic Type Conversion

Types can be converted from one to another if it makes sense – the string “123.45” can be converted into the number 123.45. But the string “A123.45” can’t be converted.
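A quick sketch of what Javascript’s Number() function gives back (values made up for illustration). An unconvertible string quietly becomes the special value NaN (“not a number”) rather than raising an error:

```javascript
var good = Number("123.45");    // 123.45 — a sensible conversion
var bad  = Number("A123.45");   // NaN — can't be converted

isNaN(bad);                     // true
```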

Javascript attempts automatic type conversion when it makes sense, but beware of the rules – witness the following weirdness:

"3" * "2"    // this returns the number 6
"5" + 1      // returns the string "51"
"5" - 1      // returns number 4

You can see that it’s beneficial to explicitly tell Javascript what you want done instead of relying on the automatic type conversion, which you can do by calling the Number() and String() functions:

Number("3") * Number("2")   // returns 6
"5" + String(1)             // returns "51"
Number("5") - 1             // returns 4

Variable Scope

Understanding when a variable is ‘alive’ and valid is a huge concept. Most current languages allow the same variable name to be declared again within a different lexical scope; i.e. inside a function as well as in the surrounding code. These are logically two different variables, so do not get confused.

In this Javascript example, the variable ‘a’ is declared twice; within the function, and outside the function. The inner ‘var’ makes a whole new local variable ‘a’ that is independent of the outer ‘a’ which has a global scope.

var a = 3;      // global

function f() {
  var a = 5;   // independent (local) variable
}

function g() {
  // in here, you can access the global 'a'
}

Whereas in the following example, the inner ‘a’ actually refers to the outer ‘a’, which may or may not be intentional.

var a = 3;      // global

function f() {
  a = 5;      // affects the global 'a'
}

Quirks abound in Javascript due to its automatic variable declaration:

function f() {
  a = 5;      // creates a global variable 'a' !!!
}

Having functions ‘reach outside’ their scope is usually a bad practice and can lead to unintentional side-effects.

Best Practice:  Always declare variables with ‘var’ instead of allowing Javascript to figure out if you’re using a new variable or an existing one.

PHP Note: Of course, PHP has to be different. PHP scopes variables only to the function they are declared in; a function does not automatically see variables from the enclosing scope. This is a very frequent pitfall for PHP programmers used to other languages: global variables look like they’re declared, yet they’re unassigned when you try to use them inside a function.

Access to a global variable from inside a function has to be explicitly declared. Why? Probably to protect you from accidentally changing a global variable. So to access a global from a function, you need the global keyword, like so:

$g = 5;      // a global variable

function addG($num) {
  global $g;    // have to have 'global' here..
  return $g + $num;
}


Functions

Understand that functions are ideally written without any reliance on external knowledge of who’s calling the function. In standard terms, this reduces side-effects, as noted above.

Functions written this way can also be modular; think about giving someone else this function to use in their code – will it be possible, or will you have to do something to their code or yours in order to make it work properly?

Parameter Passing

Understanding that parameters in a function are placeholders within that function and allow you to specify functionality without needing to know anything about who’s calling the function.

This again is an example of scope – function parameters are simply just scoped to the function.
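A made-up illustration: the parameter shadows a global of the same name, so the function never needs to know what its caller is doing.

```javascript
var count = 100;   // global

function double(count) {
  // in here, 'count' means the parameter, not the global
  return count * 2;
}

double(5);   // 10 — and the global 'count' is still 100
```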


Objects

Understanding the concept of objects is essential for working with the HTML DOM, and pretty much needed to do anything useful, such as storing and retrieving data. So much functionality is wrapped inside objects, or arrays of objects, and so forth.

Understand that just about everything has been ‘object-ified’ — it’s one way of looking at the world in programming terms of objects, functions, fields, and events.

Arrays / Collections

Objects become more useful when you can have more than one of them (money, for example, or students, or houses).

Understand moving through collections of objects one at a time, or directly through key values. You’ll do this often; i.e. grab a list of student objects and display all names on the screen.
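For example (a hypothetical list of student objects), you can walk through a collection one at a time, or reach an object directly through a key value:

```javascript
var students = [
  { id: "s1", name: "Ava" },
  { id: "s2", name: "Ben" }
];

// moving through them one at a time...
var names = [];
for (var i = 0; i < students.length; i++) {
  names.push(students[i].name);
}

// ...or building a lookup so you can go directly through a key value:
var byId = {};
for (var j = 0; j < students.length; j++) {
  byId[students[j].id] = students[j];
}

byId["s2"].name;   // "Ben"
```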

More advanced concepts

Dig into HTTP

Understand a little bit about the mechanics behind web traffic – GETs, POSTs, cookies, and so on, so that you know that a server is involved somewhere – not everything useful executes purely in the browser.

This positions you well to use the raft of services available through REST web services, etc., and inevitably to try to debug some of your applications that use web services.

Object Models

An Object Model, viewed through Javascript goggles, is a bunch of related objects, with properties, and functions to manipulate those properties. An OM tries to model something in the real or abstract world in some sort of logical way so that you can do useful things with it programmatically. An Object Model for a car might look like this:

var car = {
  make: "Mazda",
  model: "CX-5",
  radioPresets: [ "104.5", "96.1", "89.3" ],
  startTheCar: function() {},
  driveTo: function(location) {},
  pickUp: function(persons) {},
  setRadioToPreset: function(presetNo) {}
};

So a car object becomes a convenient way to encapsulate in one package useful information about an instance of a car (“properties” such as make and model) and also the useful functions a car can perform (driveTo a location), otherwise known as “methods”.

The packaging aspect is a convenience in that you can call methods and access properties using the “.” operator; i.e.

car.make = "Volvo";
car.pickUp(["Martin", "Kathy"]);

You can hopefully envision a world where objects interact with other objects (cars interacting with road objects), objects containing other objects (cars containing arrays of people objects to represent their occupants), objects composed from other smaller objects (car containing an engine object, a radio object, 4 wheels objects), each composed of sub-objects.
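A made-up sketch of containment and composition, extending the car idea (the engine and occupants here are purely illustrative):

```javascript
var car = {
  make: "Mazda",
  engine: {                    // the car is composed of an engine object
    cylinders: 4,
    start: function() { return "vroom"; }
  },
  occupants: [                 // ...and contains an array of person objects
    { name: "Martin" },
    { name: "Kathy" }
  ]
};

car.engine.start();      // "vroom"
car.occupants[1].name;   // "Kathy"
```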

Aside for sticklers: the code above is really for illustrative purposes; current Javascript provides a way of defining objects through functions and prototypes in a more object-oriented way (i.e. compared to something like Java or C#).


The DOM

More on DOM stuff later; for now, understand that this is an object model (hence the “OM” in “DOM”) that represents in a tree-like form what you see in a browser. You can manipulate the objects in the tree (setting properties and calling methods) to control the visuals in the browser window.

Javascript braces hell

Start to understand Javascript’s approach of passing functions as parameters to other functions, and the resulting nested curly braces. Format your source code properly to foster readability.
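For example, the anonymous function below is itself a parameter passed to the array’s forEach method; indenting each nesting level keeps the braces readable (names are made up):

```javascript
var people = ["Ava", "Ben"];
var greetings = [];

// the anonymous function is a parameter to forEach —
// note how its braces nest inside the call's parentheses
people.forEach(function(name) {
  greetings.push("Hello, " + name);
});

greetings[0];   // "Hello, Ava"
```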


Events

Understand the concept of events as notifications that occur due to some user or system action. Remember that everything is an object, and objects can trigger events when something happens to them – like when a page is loaded, or when an image notices that a mouse pointer has moved into its field of view.


Frameworks

Finally, we get to frameworks. Frameworks are just pre-written code, not magic. The advantage is that you build things faster, likely with fewer bugs, and get more functionality than doing it all yourself. The downside is that frameworks sometimes don’t do exactly what you want, and you can easily get into a kid-in-a-candy-store situation, grabbing one UI gadget that uses such-and-such framework and another that uses a different one, bloating your page and making it slow or buggy if the frameworks don’t play nice with each other.

But they sure save a lot of time, so just be aware and do your homework.


Reusability and Modularity

I mentioned one tip earlier – write your code in a way that someone might be able to re-use something you wrote without really needing to modify it much, or at all. A simple rule:

If you have a simple function, regardless of how trivial, think about whether it could be refined in a way so that someone could either include the piece of script, or include the function (copy/paste) and use it right away. If not, then it means that there is some dependency on something that exists that shouldn’t be there — probably on a variable you have in your code.
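A made-up example of the difference (the function names and tax rate are arbitrary):

```javascript
// Depends on an outside variable, so it can't be pasted elsewhere as-is:
var taxRate = 0.05;
function addTaxBad(amount) {
  return amount * (1 + taxRate);   // hidden dependency on 'taxRate'
}

// Self-contained — someone can copy/paste this and use it right away:
function addTax(amount, rate) {
  return amount * (1 + rate);
}

addTax(100, 0.05);   // 105
```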

Continuous Improvement – Refactoring

It’s rare for a beginner programmer to understand how best to break code into reusable and modular bits.

On one hand, if you don’t break down a module into potentially reusable pieces, you end up with a huge, monolithic piece of code full of repetitive stuff.

At the other end of the spectrum, you might break something down into far too many small pieces that, while modular, seem excessively fussy and make the code harder to read.

What to do? I typically notice code that I’ve written a few times; say,

document.getElementById("welcome").innerHTML = "Hi There";
document.getElementById("status").innerHTML = "123";
document.getElementById("status").innerHTML = "456";

And think…maybe I can make a function with parameters to do the same thing, and name it something more descriptive. So I refactor my code a little:

function DisplayText(element, text) {
   document.getElementById(element).innerHTML = text;
}

DisplayText("welcome", "Hi There");
DisplayText("status", "123");
DisplayText("status", "456");

Better! This illustrates a simple way to cut down typing at least. And you can go one further:

function DisplayText(element, text) {
   document.getElementById(element).innerHTML = text;
}

function DisplayStatus(statusText) {
   DisplayText("status", statusText);
}

DisplayText("welcome", "Hi There");
DisplayStatus("123");

And so forth. Is this better than just calling document.getElementById() directly? Not necessarily, as readability may be impacted (someone will have to check what DisplayStatus() does the first time they see it).

But what happens if you want to change where the status message goes? Then you just go to one place, DisplayStatus(), instead of the two (or more) places in the code you would need to change if you used the document.getElementById() technique.

All I can really advise around modularity or creating functions is to be alert for:

  1. repeated code – that may be a candidate for creating functions to do stuff and to minimize cut/paste errors
  2. useful functionality – maybe I need to put “, ” in between two strings to format a name (“Doe, John”). That might be useful elsewhere, so a function would be a great idea.
  3. centralized functionality – things I know I may want to change often or later – like DisplayStatus() above. Maybe I want to make the message appear in a red colour or different font. Making it centralized ensures that when I change the function, all possible places that I display the status also change with it.
  4. deferment – should I just toss in a function as a placeholder for now, and get on with the rest of the coding that I have in my head, and get back to that later?  I.e.  BigMathFunction(x, y, z, a, b)
  5. plain old readability. Can I just chunk up a huge function into smaller functions and call them in the same sequence; i.e. BeginningPart(), MiddlePart(), EndPart()
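For item 2, a hypothetical helper along those lines – a tiny, dependency-free function that joins two strings with “, ”:

```javascript
// Hypothetical helper: format a name as "Last, First"
function formatName(lastName, firstName) {
  return lastName + ", " + firstName;
}

formatName("Doe", "John");   // "Doe, John"
```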




Framework Bloat and Missing Fundamentals

I’ve been interviewing with several large companies for a new role in software development management. I got through to the panel round of interviews with a nationwide retailer to work with their e-commerce presence.

Prior to this, I did a little homework and did what I often do to gauge the maturity level of a company’s website: looking for any obvious issues, the kind of technology used, and evidence of UX/UI consciousness.

The Issues

I found some glaring issues, such as terrible page load times stretching up to 8 seconds, which is a huge problem. In addition, page analytics from GTmetrix gave it a failing mark, as did Google’s PageSpeed Insights. A product page required a whopping 180+ HTTP requests to load – a number I’ve never seen before (most sites keep it to under a third of that).

All are red flags that indicate the need for attention – the page will take time to load, causing a potential speed penalty on Google, not to mention that customers will drop off; and the extra load on servers could cause scalability problems.

The interview was with my future (non-technical) boss and with several of their existing web front-end developer team members that would have been my subordinates. After the normal pleasantries, the developers proceeded to fixate on my thoughts about the latest in front-end technology. I’m repeating my thoughts in this blog post.

I stated that frameworks and technology change and by necessity, there is always a need (and desire by developers) to keep trying new frameworks, but there are some issues with the frameworks that exist today that need to be understood.

Framework Code Bloat

These issues have a lot to do with the size of frameworks and libraries. The interviewer was critical of my experience with one of the older UI frameworks we used in my previous projects. But a framework is just a framework – a simple way to avoid the tedious Javascript programming needed to pop up a dialogue box or a panel or a lightbox – there really isn’t much magic to it. There is nothing a framework provides that cannot be achieved by a good programmer – just with a lot more time and frustration.

The downside of frameworks is the amount of code required to include one, especially if no slimmed-down versions exist. A framework will typically require a call to the server or CDN to pull its Javascript includes, plus calls to pull the related CSS and sprites. These calls can introduce extra delays in processing the page and increase browser memory usage. One blog article showed 90% of the CSS going unused in the Bootstrap demo pages.

Single Page Applications

I went on further to emphasize that a lot of front-end design is necessitated by having to bow down to Google’s presence, and that several technologies are currently incompatible with good SEO – SPA (single page applications) built on frameworks like Angular are terrible for SEO, and not a good candidate for building out sites that benefit hugely by having their catalogue pages indexable.

I went on to say that a single product page view is enhanced by bringing in other critical information, such as in-stock status, through better investment in backend systems and established means such as web service and AJAX calls; and that the latest UI frameworks, while fun, don’t replace the need to deliver these fundamental features when the goal is increasing conversions.


Don’t Forget the Fundamentals

But do not forget to keep pushing the fundamentals – page speed, functionality, SEO-friendliness, user experience. Not one of these elements would have been improved by plugging in the latest jQuery/Angular/Web Component framework. What was needed in this case was a roll-up-your-sleeves focus on reducing some of the code bloat on these pages. These frameworks may speed up development, but they have the downside of slowing down responsiveness for the client.

Using tools such as Google PageSpeed Insights, GTmetrix, and the performance profiling developer tools in Chrome, Safari, and Firefox, and making the necessary adjustments to the HTML and server side, are key to providing that great first-visit impression – before the user has even started interacting with the fancy libraries and frameworks – or, even more fundamentally, to showing up in a search in the first place.


Regardless of the interview outcome, it was a very, very instructive process. I wish them all the best of luck, of course.

Bye, bye Nikon, hello Canon

(This was written in 2015 — things have changed since then, not for the better for Nikon).

For 25 years I was a Nikon fan, bordering on fanboy/bigot. That’s changed now, and the reasons for the change I found were both emotional and logical.

Interestingly enough, my very first SLR camera wasn’t a Nikon. It was a Pentax that I bought on a whim while on a trip to Edmonton, primarily because my wife-to-be had a Pentax camera.

Not too long after that, I was convinced to buy a Nikon F-801 by my father-in-law to be. He was also getting into Nikon after some years with Pentax (Pentax was a family thing).

Thus started a 25 year journey with Nikon. Through its tenuous days as a camera company when autofocus was invented I remained loyal, occasionally checking out the competition (Canon) but not liking what I saw despite intriguing technology. I continued to buy the top of the line pro cameras (F4s, F5) and lenses, building a system to last.

Nikon was a conservative company at the time, focused very much on engineering and lens design. Film cameras had pretty much reached their peak around the late 90’s / early 2000’s, with new designs basically being rehashes of old designs. The emergence of digital re-energized the camera market, to the relief of manufacturers.

When digital SLRs appeared, I continued to be a fan, despite some initial quality differences. I envied some of the Canon offerings at the time, which offered some fantastic high-ISO results in dim lighting. But I remained a steadfast supporter of Nikon despite Canon arguably being the go-to pro system at that time. After all, I could still use my lens collection on the new bodies. Nikon was just “better” in terms of having this spiritual connection to its users, and a heritage of engineering excellence.

When the D3 came out, Nikon started to take the crown away from Canon. Nikon users now had bragging rights with what really amounted to a save-the-business set of products (D3/D300). This ushered in a generation of excellent products that brought in unsurpassed image quality (culminating in the current D810, but really, all Nikon products using the Sony sensor technology have, in my opinion, outclassed their Canon counterparts). Canon users started switching to Nikon.

So why, when I was already an owner of a large Nikon lens system, along with the best image quality in the industry would I even contemplate switching over to start again, with less image quality?

The answer is twofold. The first would revolve around video, and my needs for a system that could support stills and video. Canon certainly had the lead there, with the 5D series and their Cinema EOS series (I own the C300 and C100). I could certainly still use my Nikon glass with adapters on the Canon bodies, and in fact older Nikon F-mount lenses are very popular due to having aperture rings, so that wouldn’t have been a substantial problem. Still, having native compatibility was a nice-to-have feature.

The second revolved around the Nikon D4. The logical progression would have me upgrade to the D4 as a mere formality. I loved its predecessor, so it would have been an utter no-brainer to move up in terms of features, resolution, and quality.

But Nikon messed it up. They introduced a new battery system and, in a giant act of indecision, a dual card system with different formats (CompactFlash and XQD) instead of the previous dual-CF arrangement. This caused problems for pros like me – you could not easily shoot with your previous camera (D3) without bringing extra crap around (another charger, a different set of batteries, and another set of cards). Previously I had shot with a D3x and D3s side-by-side. Now I couldn’t ease into the D4 world with a D4/D3 combination. I’d have to either buy two cameras or stand pat.

The D4’s video capability, had it been decent, may have tipped me back to Nikon. But it wasn’t. It was awful compared to the cheaper Canon 5D Mark II/III. For the flagship camera to have such middling video was a disappointment, but in all fairness, Nikon was not a video company, and the technical challenge at the time may not have been worth it.

In contrast, Canon’s 1Dx camera introduced a backward-compatible battery (LP-E4n) and charger that can power the older cameras. Perfect. No friction.  1Dx video is quite good as well.

As I’ve become more involved in customer and user experience in my day-to-day work, I’ve become more sensitive to this type of friction — making someone’s life just a little harder may not be a big deal, but small frictions add up until, like me, they move over to a competitor and nothing will bring them back.

In my case, I knew I was giving up superior image sensor quality and great support (at that time, Nikon had a service depot 10 minutes away from me with excellent service — now, alas, they have shut it down). The peer pressure (Nikon was this wonderful, underdog company, sort of the rebels against the evil Canon empire) from family and work (completely Nikon-focused) made the decision even harder to contemplate, let alone admit.

But, when you create enough friction, enough reasons to go elsewhere, and you create the impression that you do not understand how your customers use your products, you are vulnerable to your competition — to a new upstart, cooler rival, or even to your old nemesis. This is what ultimately happened in my decision process.

Even absolute superiority of one’s product is not enough if it is not taken in context with the rest of the product’s ecosystem. Nikon makes better images, but the inputs into the system – lenses, ergonomics, accessories, batteries, cards, support – all factor into the decision for a professional. It came down to: “what system can I trust to give me the best quality image in the most adverse shooting conditions?”

Recently, I feel that Nikon has gone down a perilous path that, left unchanged, will lead to its demise. For example:

  • Denying issues with the D600 sensor dust (a design defect where bits of material slough off the shutter mechanism and deposit on the sensor). It took a massive effort, including China halting sales of the camera, to finally get an admission of a problem
  • D800 focus sensor alignment issues (again, not admitted until a huge internet uproar)
  • Closing of service depots
  • Lens firmware issues (several recent lenses have had recalls for compatibility firmware updates). To add insult to injury, customers have to pay for shipping to the (no longer convenient to some) service depot.
  • Overall build quality of lenses is variable compared to competition (e.g. 24-70/2.8)
  • Terrible design decision for their 70-200/2.8, a staple pro lens, to “cheat” on focal length.
  • A general sense that maybe one shouldn’t buy a Nikon product at introduction until the bugs are worked out
  • No wireless (radio) flash support. This is a great feature of the Canon 600EX-RT system I enjoy: it fires reliably in the field under tough shooting conditions, whereas the Nikon line-of-sight SU-800/SB9x0 system makes it a total crapshoot whether the flash will fire, even after the photographer has painstakingly positioned the flashes for successful firing.
  • Lack of direction or strategy for its mainstream crop-sensor (DX) products. Nikon appears to want to push users upwards to its full-frame products (where higher margins exist).
  • Lack of compelling mirrorless product (pro/prosumer level) or pro camera with at least an EVF option.
  • Underwhelming software offerings (Capture/Capture NX) that have not kept pace with the ease of smartphone transfer, or the needs of bulk processing by pros.

I feel that Nikon is not listening to its user base and is cutting costs (sometimes the cuts appear too deep, as with quality assurance). This smells like a company in trouble. They have done impressively well with cutting costs to keep their margins up, but that seems to be the only thing keeping them profitable. At some point, all the low-hanging fat (to mix metaphors) will have been trimmed. Where will growth, or even continued sustenance, come from?

Nikon still has impressive products and an engineering mentality, but in the absence of real customer feedback, an engineering-centric company creates products that only its employees feel are useful. Customer input has to drive some of these decisions.

More to follow on Nikon…


Welcome Back!

After spending a considerable amount of time on my company blog, I’ve decided to create more content here that’s of a more unfiltered, personal nature and would be of less interest to those seeking weddings and photography.

You’ll find musings on software development, personal projects, various rants and raves on customer experience — all things dear to me.

Since this was a forced upgrade (no thanks to GoDaddy!) without an opportunity to transition easily from my previous blogging engine, there will be some broken images from old content.

I hope you’ll like what you see!


Vancouver Photo Marathon 2011: The Documentary


“Vague” – 2010. Nikon F5, 70-300VR, exposure unknown

Last year I competed in the Vancouver Photo Marathon (#12x12yvr) and ended up with a theme winning photo for “Vague”.

“Vague” happened to be one of the easier themes for me to devise as I instantly had an idea for what to do with it. Executing it along with the other 11 themes was a lot tougher, and the whole event really stuck with me in ways I probably don’t fully realize yet, leading me to think about being involved in some way this year.

Through various circumstances, but mainly my vacillating about whether to enter or not, I missed out: this year’s event sold out. Intrigued about other entrants’ experiences throughout the day, I pitched the idea to the organizers of doing a video documentary on the event by following some willing entrants around. Marathon day arrived, and it happened that I had to leave to film a cancer fundraising event later that day, so I could only stay for seven themes. Still, better than nothing. I managed to hook up with Ryan Mah (#32) and Ruwan Fernando (#56) outside the cafe right after the first theme was announced, and we went filming along with their helpers (Sara, Garvin, and Jana). I wish I had had a helper last year; what a luxury to have a willing model!

It was a lot of fun following them around, recalling some of the same agony I went through last year trying to come up with ideas for each theme. I love that they were so open and accepting of having me tag along and stick the camera in their faces, so I hope I’ve managed to capture some of that in the video. I had a ton of fun and laughs. Thanks, guys! I hope this brings back good memories, and to the other entrants reading this, I hope your day was at least as much fun as it was undoubtedly nerve-wracking, confusing, stressful, and…uh…vague. I know what it’s like.

Perhaps your shot came to you instantly and you executed it perfectly. Or more likely, you struggled, surfed the web on your phone, worked on an idea for a while, backtracked, refined, then nailed it. Or you totally got lost and copped out with a shot you knew in your heart wasn’t going to be great, but you needed to just get it over with? Let’s hear what happened!

Update September 16: At long last, the videos are done. I’ve kept them down to about 3 minutes each. Enjoy!

Warning: some coarse language and mature themes. PG-13 or so.

Ryan and Ruwan’s Story, Part 1

YouTube Link


Ryan and Ruwan’s Story, Part 2

YouTube Link


Ryan and Ruwan’s Story, Part 3

YouTube Link


Ryan and Ruwan’s Story, Part 4

YouTube Link


Ryan and Ruwan’s Story, Part 5

YouTube Link


Note: These videos might be subject to tweaking, and since YouTube doesn’t let you change videos in-place without creating a new one, if you link to them directly they may stop working. Just come back here!

A Brief Fling with Film at the Vancouver Photo Marathon


Last Sunday, I dusted off my trusty Nikon F5 film camera and several lenses, and took part in the Vancouver Photo Marathon, a 12-hour photo contest in which participants were given 12 themes (one theme released each hour) to shoot, in the exact order of the themes.

Yes, this was a whopping single frame of film per theme, which added to the stress.

Tech bits:


  • Nikon F5 – Despite its 13-year age, it still has full compatibility with my new and old AF/AF-S lenses and the very reliable and familiar 1005 area RGB metering system still used today. I hadn’t fired off film in this baby for almost three years but given that it is built to last, I popped in a new set of batteries and hoped for the best. No time to really test further.


  • With 400 ISO Kodak film and rain in the forecast, I brought along some fast primes, like the 28/1.4 and 85/1.4 which also got me nice depth of field control. The 105VR macro lens made the cut, and a good thing too as I used it in at least three of my theme shots.  The 17-35/2.8 (my favourite film era lens, but not so happy on digital) and 70-300VR also made the cut. Two lenses didn’t get used at all – the fisheye and the 28. I was mainly able to shoot outside in decent light despite the weather so the fast lenses didn’t become as necessary.


  • The latest and greatest SB-900 does not work in TTL mode with the F5, setting itself to “A”, but the SB-800 does, so the SB-800 it was.


  • A Manfrotto monopod. A tripod would have been a better choice but mine is heavy and I didn’t have an Arca-Swiss plate for the F5
  • Point-and-shoot camera for stills and video. I mainly used my iPhone instead
  • A Joby Gorillapod for holding my digital camera or flash
  • An off-camera sync cord (SC-29)
  • Remote release
  • Backpack for the gear and a fanny pack for overflow

All in all, this was a pretty substantial load to carry along all day. Next time I might just have fun and use one or two lenses, or maybe a fully manual camera. However I was grateful for the sealed, water-resistant gear that day.

So on to the experience…

One word: GRUELLING! The “Marathon” name is well-deserved on many levels.

Physically gruelling because of my extremely heavy choice of gear. This isn’t much more than I’m normally used to lugging around, but when I’m forced to move around for 12 hours without much of a break in between, to hop up and down the Skytrain / Canada Line station stairs, and to dart in and out of the downtown core from Yaletown with a heavy backpack, it gets very tiring. Add to that mix the heavy rain that day, which had my feet and shoes soaked by about theme 2, and you have a pretty soggy, miserable time. Of course, as is the norm before a big event, I had trouble getting to sleep and had a listless night leaving me desperate for caffeine. But as they say, what doesn’t kill you makes you … umm… hurt a lot.

Mentally gruelling because of the stress of getting that one frame, the frame that says it all, the frame that you start agonizing over the moment you get the hour’s theme, and the frame that you have to take full responsibility for every square millimeter of. Then in a click of the shutter, it’s all over. No going back to fix something that could be improved, no point in having any regrets. Just clear your mind and move forward to the next thing. If that’s not some sort of metaphor for life, I don’t know what is.

I did hear in my head the gentle, abusive tones of Jay Maisel as I went through the day – we’ll see if that helped my pictures or not. Maybe I was just a little low on sugar.

Highlights: Just being able to say I did it. Only other participants will likely understand the full extreme nature of the event. Being able to let go and just try new and funky ideas, like multiple exposure, without a clue as to how they would turn out. Getting a taxi driver to help me out with the last shot. Setting up a makeshift studio in an alley just out of the rain hoping nobody would wonder what I was doing. Buying props to shoot with. Winning a cool draw prize just by being present for the hourly theme draw. Having a chance to shoot film again!

Lowlights: My 17-35 lens decided to drop 2.5 feet out of my backpack onto the road, making a sickening glass-crunching sound. Amazingly, there’s just the tiniest scuff on the lens barrel and rear end cap, and the lens appears totally fine.

The organizing by the 12x12YVR gang was excellent, so I would definitely recommend anybody else give it a try. Would I do it again? Ask me once my body stops aching from the day! I would be tempted to help out, for sure, so I can subject others to the same exquisite torture. 🙂

I hope to see the results and chat with the rest of this year’s gang at the big reveal and results announcements on October 16!

London Drugs Print Quality – Wet versus Dry Technology


I recently had a chance to compare some prints made on the new “dry printing” technology from the photo labs at London Drugs.  I had a print previously done by them with the traditional photographic chemical “wet” process (often referred to as silver halide technology) and was able to informally compare it with a new print of the same digital file done on their latest dry technology.  London Drugs’ photofinishing equipment supplier is Noritsu, who have provided both the wet and dry technology printers.  London Drugs recently received some publicity from Noritsu on the launch of their new printers.

In this blog post, I’ll outline my initial thoughts and impressions on the two technologies and do a bit of an informal shootout in a few categories.


Going beyond the marketing speak, Noritsu’s “dry print” process incorporates at its heart Epson’s four-colour inkjet technology, an excellent basis to start from.  I figure the term “inkjet” evokes the wrong image in the minds of consumers, specifically low-cost home printers, which are notorious for costing less than the cost of ink to refill them, and this association seems to compel marketers to substitute terms like “dry printing”, or in fine art printing circles, the term “giclée”.  This association is unfortunate, since there is as large a gap between cheap consumer inkjet and high-end inkjet as there is between car makes and models.  Inkjet done properly can be very good indeed.

Inkjet technology, traditionally a higher-cost medium, is approaching a price point where prints from specialized high volume printers can start competing with the traditional silver halide minilab machines.  The environmental benefits of dry printing are also obvious — the wet process requires lots of chemical mixing and the printers discharge effluent that has to be treated, while the dry machines do not.  Colour inconsistency caused by variations in chemical strength is substantially less of a problem on inkjet, also helping to reduce overall waste from having to redo unsatisfactory prints.

The Prints

While scanned images of the prints don’t really convey what they’re like in person, I’ll show them here for completeness.  The inkjet print of Amy to the left is more faithful in colour to the original digital image, although the real differences in colour aren’t as dramatic as they appear here; I blame the scanner’s automatic exposure and colour settings, which made the scan of the wet print a bit warmer in colour.  My descriptions are based on looking at the actual 4”x6” prints side-by-side, not at the scanned images, so you’re better off taking my word for it!

[Scans of the two 4”x6” prints: inkjet (left), silver halide (right)]

Initial Impressions

The dry print gives an extremely good first impression.  When viewing the dry print after the wet print, there is an impression of greater clarity in the dry print that is instantly noticeable — the difference between them is like viewing a scene with and without a glass window in the way.  The dry print paper also has a different surface texture and visual quality.  Depending on the lighting, the dry prints have less glare to them, adding to the extra impression of clarity.


Blacks are deep, dark, and black, without the muddy brownish-bluish cast they have on wet prints.  Shadow details close to black are also rendered very well.  This is probably the greatest difference I see between the two.  Amy’s dress is black, and details in the dark fabric definitely show better on the dry print than the wet.

The gamut (the range of colours that can be reproduced) of inkjet technology is wider than that of silver halide prints, so saturated colours like deep reds and purples, which traditionally lost small nuances of detail in a wash of similar colour, are very well reproduced on the dry print.  Flower photographers should rejoice!

Compared to the original digital image on a colour-calibrated monitor, the dry print also matches the colours very well, so the overall ability to get a good colour match against what you see (presuming a colour-managed workflow) is good, thanks to the wide gamut and accuracy.

All in all, the colour on the dry print pops.  So much so that people might be a little surprised at first, sort of like when CDs first came out and people were a little unused to the accuracy, clarity, and dynamic range of the sound.


Like most inkjet prints, fresh-off-the-printer dry prints have a slight vinegary smell that dissipates in a few days.  Fresh-off-the-printer wet prints have a similar sort of chemical smell, though less acidic, which likewise dissipates.

The wet print has a thicker, smoother coat on top of it that is shinier and reflects more glare back, while the dry print is a little grabbier in texture (it can squeak more if you run your finger over it) and has better anti-glare properties.

The paper stock used for the dry print is not quite as thick as the Fuji Crystal Archive paper used for the wet print.  However, I’ve recently heard that the thick Fuji paper may no longer be available and that some thicker inkjet paper may be coming on the market.  So I think this concern may ultimately be resolved.

I did an informal scratch test on the paper surfaces using my fingernail.  I initially expected the grabbier dry print surface to be a little less resistant to scratching than the slicker wet print, but I was able to scratch both with about the same ease.  In fact, I think the wet print suffered more ultimate harm than the dry print: once you scrape down to the paper below, the surface coating is easier to scrape off (just as with paint — once you get between the surface and the paint, the paint comes off more easily).  It was a bit harder to do the same on the dry print.

Image details

Inkjet printers do not print continuous-tone images — they are made up of microscopic dots of ink in each of the four ink colours.  The dots are more noticeable in large areas of lighter colour, where fewer ink dots are required and they stand out more in contrast with the white paper, and I find that this imparts a slightly grainier feel to the image in these areas.  Grain isn’t always a bad thing, as it can impart an illusion of high detail or texture even when none is present.  Normal viewing of my prints shows good crisp, sharp details on both, and the only area where I saw any appreciable difference between the two was an area of the image with fine hair (fine hair is always a good torture test for resolution and sharpness).

Below are small crops (inkjet left, silver halide right) of that area.  As a reference, this is a magnification of an area about 1/2” wide on the print.  There is a difference in the extremely fine details, such as the fine strands of hair and eyelashes, that is only just visible when viewing the print at normal distances.  I found this somewhat unusual since both prints seemed quite equal in sharpness everywhere else.  I feel that these differences could be in part due to differences in the resampling of the original image to the specific printer resolution and the sharpening algorithms applied to the dry and wet prints.  You can also start to see the individual ink dots, or at least the grain effect, as well.

[Magnified crops of the fine-hair area: inkjet (left), silver halide (right)]


Extremely small banding artifacts are sometimes visible at very close inspection.  Banding is the appearance of faint horizontal lines caused by microscopic variations in the feeding of the paper through the printing mechanism (the print head traverses back and forth on one axis as it lays down the ink dots and the paper has to be fed through extremely precisely on the other axis).  Any subtle variation in the paper alignment or feed rate may show as a line in the print where a slightly wider gap or overlap with the previous print head pass occurs.  A similar problem with lines running through the print can also occur on inkjet printers if a print nozzle is clogged, but it is very obvious when this happens.  If you stare at the following image (about 1” wide on the real print) long enough, you might see a subtle, horizontal line about halfway down (right below her fingernail) that runs across the entire width of the image.


Again, at most normal print viewing distances this is usually not visible, but continuous areas of the same colour could make banding easier to spot if it does occur.  I presume that proper maintenance and calibration of these machines will be extremely important to retain good performance.  Similar problems can happen with wet printing technology as well — dust or other grit can get embedded into the soft parts of rollers or squeegees and cause scratches on the print surface.  These problem prints are normally spotted by the operator and never get into the hands of customers.

Getting the best quality

Out-of-camera JPEGs should look really good on the new dry technology.  London Drugs’ philosophy of having the lab technicians colour correct and inspect each image does help to deliver overall pleasing images; their overall “look” favours punchy, contrasty, saturated, customer-friendly images.

Having viewed and printed thousands of images myself, my personal feeling is that all images do need some level of adjustments for best results, and while automatic correction technology has come a long way, there’s still no substitute for the human eye to spot and correct colour variations.

So while most people are best off allowing the lab operator to colour correct images, there is always the option to request that images, especially those with a deliberate colour treatment, be printed directly without corrections.  It goes without saying that if one is to use this option and perform the image corrections manually, it should be done on a properly colour-managed and calibrated system.  For example, laptop displays typically tend to be on the bluish side in order to give an impression of brightness, and this skews colour.  The best colourspace to set your files to for printing, as with most photofinishers, is sRGB.

Though problems have been a rare occurrence in my own experience, London Drugs has always been very accommodating of reprinting items to my satisfaction.


The jump in quality of pictures on dry compared to wet technology is quite obvious and has to be seen to be believed.  The ease of obtaining image quality previously only available through much costlier home inkjet printing is a great thing, and is as easy as submitting images to one’s local London Drugs.

While I have written about downsides such as the lack of continuous tone, detail loss, and the potential for banding, in actual practice and at normal viewing distances they are hardly noticeable by most people (if they are present at all), so the edge goes to the dry technology for its superior colour fidelity.

For professionals, the ability to get high quality prints at competitive pricing may make the need to maintain one’s own inkjet printer (and the associated cost of ink, paper, and wasted paper) a lot less compelling.  Personally, I’ve chosen not to have my own printer for that very reason.  I hope to test the London Drugs offerings in the future to see how their enlargement sizes compare to both high-end inkjets and the traditional wet process.

Having had the (messy) experience of darkroom work, I do lament somewhat the passing of photographic paper, which, as I discovered in my analysis, still puts up an impressive fight against the newcomer.  But there’s no denying the stronger and more accurate colours that the new dry printing technology brings, nor any denying or slowing the inevitable march towards the new technology, just as there was none with digital imaging.  Also, the “green” aspect of dry printing is something we can all enjoy.

* Full Disclaimer:  London Drugs is a client of mine.  This evaluation was conducted purely on my own time and without any prior knowledge or pre-arrangement on their part.  I use London Drugs for my personal and professional printing needs and recommend my professional clients do the same.

Fairness at last? The Turing Apology

Last Easter weekend I watched the film The Tuskegee Airmen (1995), the story of the struggles that black American aviators went through in order to fly WWII missions for their country. From being automatically labeled by flight surgeons as unfit to fly due to epilepsy, to the prejudice from their fellow servicemen, it was both a celebration of the human spirit and a glimpse into the unabashed racism existing a scant 65 or so years ago.

It reminded me of an equally sad event: the treatment of Alan Turing, the brilliant British mathematician who helped turn the tide of WWII.  Until its declassification in 1974, the fact that the Allies were able to decrypt enemy communications throughout the war was not known. This project, known as Ultra, involved the breaking of the codes generated by the Enigma cypher machines used by the Germans.  The Ultra-decrypted information gave the Allies crucial advantages, such as being able to locate and sink the U-Boats that were strangling Britain’s supply links, and to direct aircraft to the right places in the Battle of Britain.  The counter-deceptions used to disguise and safeguard the fact that the Allies had decrypted enemy activities and movements are also fascinating, and thanks to these efforts, Enigma was assumed to be secure throughout the war.

The Enigma codes had to be broken regularly and quickly, often within the same day, so that the information contained in the encrypted messages would still be relevant.  Though the codebreaking was a painstaking team effort by thousands of people, Turing played a crucial role in developing methods to speed it up.  It is widely mentioned that the breaking of the Axis codes helped shorten the European war by two years, saving hundreds of thousands, maybe millions, of lives.  Ultra’s contributions also extended to the war in the Pacific, where the Americans were similarly able to decrypt Japanese communications and gain the strategic advantage.  Even with the insider information from Ultra, the very fact that the war lasted as long as it did, at such a cost in resources and lives, serves to show how narrow the victory margin actually was.  Ultra may have made the decisive difference.

Besides his wartime contributions that were not known until three decades after the war, the field of computing owes a great debt to Turing’s pioneering work in cryptography and computer science, including the concept of the algorithm in programming and the Turing machine.  In 1999, Time Magazine named him one of the 100 most influential people in the 20th Century.

Given all this, it is incredibly tragic that his contributions to the world were repaid in a most horrifying and inhumane way.  In 1952, he was arrested and convicted of ‘gross indecency’ — of homosexuality, which was illegal in Britain at the time — after he reported to police investigating a break-in at his house that he was involved with a man associated with the crime.  His sentence was a choice of prison time or probation with chemical castration through hormone injections (the prevailing belief being that he suffered from a lack of female hormones); he chose the latter.  Two years later, he was dead at age 41, from an apparent suicide by eating a cyanide-laced apple.

His memorial statue in Manchester commemorates his life: Father of Computer Science, Mathematician, Logician, Wartime Codebreaker, Victim of Prejudice.  What else could the world have benefited from if his life had not been cut short?

Thanks to several online petitions and a Facebook group, the Gordon Brown government finally issued an apology in September 2009, noting his contribution to humankind and calling his treatment “appalling”.  Regardless of the sincerity or motivation behind the apology (some Britons called it a PR stunt), there is some strange justice in the fact that Turing’s legacy in computing was the catalyst for getting him the recognition and apology he never received while he was alive.

So perhaps next Remembrance Day, we should make sure to acknowledge the contributions of civilians like Turing, whose fighting of the war was conducted not with firearms, but with slide rules, pencils, paper, relays, gears, wheels, wires, and valves.  And to acknowledge, I hope, how far our society has come from the days of Ultra and Tuskegee.

13,000 Days

I attended a great session with wonderful photographer Sandy Puc’ this Sunday at WPPI.  Her founding of, and work with, the Now I Lay Me Down To Sleep organization is both heart-breakingly sad (I wasn’t able to go through the videos without getting teary) and inspiring.

She mentioned that at one time she had calculated that she had 13,000 days, or about 35 years, left to live, given an average life span.  Of course, nobody really knows whether something unfortunate might keep them from living out that full time.  I’m not that much younger than she is either… so it’s reality check time!

What if days were dollars?  How quickly could someone blow through $13,000?  Would we carefully guard the remaining money, doling it out sparingly?  Or would we squander a few here, a few there, doing not much of consequence?  Yet that’s what we do in the tiniest of increments each day with the time killers that pervade our lives.  Should we treat people who steal the precious seconds and minutes of our time (and by extension, our lives) the same as if they had stolen money from us?  Telemarketers, email spammers, I’m looking at you…

Perhaps we do need that countdown clock, ticking away, a gentle reminder that while most of us have time to spare, life isn’t something to be wasted doing things you don’t enjoy, hanging out with people that you don’t like particularly, or nurturing resentment or guilt.

Maybe a limited lifespan is a gift, something that motivates us to accomplish great things with our lives.  It’s a wake-up call to dust off those dreams and pursue them with the same zeal as a person with a limited time left.

Nikon AF-S 50/1.4G and Sigma 50/1.4 DG EX HSM

I’ve had the fortune to have these two lenses to shoot with for a while now.  I own the Nikon (Nikkor to be exact) and the Sigma was a nice loaner from Gentec International, the importer for Sigma in Canada.  I figured I might write down a few thoughts.


The "normal" 50mm lens is typically one of the cheapest lenses, included with many a film SLR.  On full-frame, it is a normal lens, offering a perspective that is fairly close to what the human eye sees, whereas on a cropped sensor camera, it offers a medium telephoto view, quite suitable for portraiture.  That being said, I rarely use one!  Why?  Simply because I prefer to use either the wider Nikkor 28/1.4, the longer 85/1.4, or for weddings I’ll use a zoom for flexibility.  I have recently become reacquainted with the 50mm range with these two lenses and it’s been a nice change of pace.

Build Quality

The Sigma certainly makes an impression with its size and heft — you get the feeling that the Nikon could easily slip inside it.  It’s a bit hard to believe that these are both 50/1.4 lenses — in fact, the Nikon 85/1.4 is just about the same size as the Sigma.  It’s obvious that Sigma threw their technological know-how at this lens.  It has a huge 77mm filter diameter, a huge front element, and is packed with aspherical elements, none of which are to be found on the Nikon.  The balance is pretty good, and it has a decent, solid feel.  The surface is a slightly rubbery, textured finish that seems to be flocked on, but I’m left wondering how durable it is.  I’ve always found Sigma’s cosmetic details just not up to the same standard; this probably says nothing about the internal quality, it’s purely the little finishing touches that are different.

Nikon’s offering matches their cameras, as would be expected, with the same textured plastic housing as its consumer-grade lenses.  Its build quality is different from the Sigma’s — not necessarily better or worse, just different, as if aimed at a different target market.  It has a rear rubber gasket to help seal the gap between the lens and lens mount, and a useful indexing dot on the housing to help align the lens when mounting it.


Both lenses are awfully close in performance in many, many respects.  I don’t test lenses methodically with resolution charts, but rather with real-world subjects, so I can only give you a qualitative feel for them; still, my findings appear to echo the general sentiment on the Internet.  So here goes:  Both are sharp wide open in the centre, but the Sigma may have the tiniest of edges.  The Nikon appears to better the Sigma in the corners.  Wide open, both lenses have a slight veiled softness to them, as would be expected.  By about f/2 or f/2.2, both lenses have cleared up, with the Nikon probably still a bit ahead in the corners.  Above about f/5.6, they’re really tough to tell apart.  They’re as sharp as can be, even on the D3x I was testing with.

One thing that struck me right away was that the Sigma isn’t a 50mm lens.  Or the Nikon isn’t.  The Sigma is maybe around a 45mm lens in relation to the Nikon.  The difference is definitely noticeable when comparing both lenses side by side from the same shooting spot.  This may be focus breathing (i.e. the focal length changing a little depending on focus distance), but my shots were taken at a distance, so that shouldn’t be a factor.  My particular sample might suffer from a tiny bit of a centering issue, with sharpness not quite the same from edge to edge, but you would have to be pixel peeping the D3x image to see this.

Fall-off (vignetting) is noticeable on both, but the Sigma is definitely better than the Nikon.  The Nikon takes until about f/2.8 to be mostly visibly clear of the darker corners, while the Sigma is similarly clear by f/2.  The Sigma’s ability to evenly light the frame is impressive, no doubt due to the oversized front element.

Bokeh appears better on the Sigma as well, wide open anyway.  The Nikon isn’t too bad, and it’s certainly better than the AF 50/1.8, which I think is one of the uglier lenses in this department, at least in my lens collection.  However, the Sigma is definitely smoother and somehow able to generate larger blur circles than the Nikon.

The Sigma is a faster focuser than the Nikon, but louder.  The Nikon is virtually silent.  I may have noticed a little more AF hunting on the Sigma, but it might just have been me.

Bottom Line

Not too surprisingly, the lenses are quite similar.  Sigma has taken a bit of a brute-force approach with this lens, probably with a mission to make it the best 50mm SLR lens for full- and crop-frame cameras.  The lens gives sharp, smooth images, focuses very quickly, and feels good in hand.  It feels biased towards someone who has very specific needs for shooting at or near wide open while retaining very good bokeh and optical performance, especially low corner fall-off.  You can definitely shoot some nice portraiture on this lens.  The downside is that it is a fairly chunky lens, something one might think twice about putting in the bag.

Nikon, on the other hand, has opted to keep the spirit of the 50mm lens, keeping it pretty compact and light while endowing it with excellent performance.  It gives a different combination of characteristics, favouring corner performance and small size while giving up a little on bokeh, fall-off, and autofocus speed.  It’s perhaps a bit more of an all-rounder, jack-of-all-trades lens.  And it’s cheaper than the Sigma by over $100 CDN.

Which one to choose?  I think it’s pretty clear you’ll get stellar images from either, and they’ll be pretty tough to tell apart if you stop down a few stops to f/5.6 or beyond.  If you’re shooting wide open to get a specific look, like great bokeh, or really crave the best low-light performance, the Sigma does everything a pro or discerning consumer would want from a 50.  If that isn’t quite your cup of tea and you simply want something to put in your bag for the rare occasions when you encounter low light, or you want amazing quality across the frame, then the lighter, cheaper Nikkor would be the way to go.