Beginning Coding Tips

Since I’ve been involved in coaching at BCIT, I’ve started to get a better insight into the learning processes and tools required to get a grasp of ‘programming stuff’.

Programming isn’t easy by any stretch, so I may not be able to nail down exactly what that single ‘aha’ moment of clarity is that makes programming all easy from that point on, or whether there have to be many of these along the road. Or whether there is a road at all, or a meandering goat path, or a highway.

I’ll begin with the basics you have to understand. I won’t rehash a lot of what should be covered, just various observations and clarifications.

Basics

Variables

OK, so maybe not that interesting, but grasp the whole concept of variables: at the most simplistic level, a variable is a box that holds something in memory.

Understand naming conventions – does your language care about upper and lower case when naming variables? Javascript does, and case sensitivity is probably the greatest cause of frustration for coders of all levels. Just be aware that if something isn’t working, this could be the simple reason.

Variable Types

Variables have a type associated with them — this is a bit of a peek behind the curtain into how they might be stored internally. Usually a variable will either be a number, a string, or an object.

A number is basically something you can conceptually punch into a calculator to do some math with and it’ll make sense. Historically, computers have liked to store numbers in specialized formats, so in other languages you may find different types that represent integers (whole numbers only) and floating point (all numbers, including fractions), and you have to choose the right type for what you’re doing.

A string is literally just text, aka a string of characters you can type on your keyboard (and many more you can’t easily type) – alphabet, numbers, emoji, characters from the world’s languages, etc. Strings are quoted with single or double quotes.
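A quick sketch of the two quoting styles (the variable names here are just for illustration):

```javascript
var single = 'Hello';    // single quotes
var double = "world";    // double quotes — both work in Javascript
var greeting = single + " " + double + "!";   // strings can be joined with +
// greeting is now "Hello world!"
```

Pick one quoting style and stick with it for consistency; they behave the same.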

An object variable is, well, an object (see objects later).

Automatic Type Conversion

Types can be converted from one to another if it makes sense – the string “123.45” can be converted into the number 123.45. But the string “A123.45” can’t be converted.
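In Javascript, a conversion that “can’t” happen doesn’t throw an error – it quietly produces the special value NaN (“Not a Number”). A small sketch:

```javascript
var good = Number("123.45");    // 123.45 — the conversion makes sense
var bad  = Number("A123.45");   // NaN — the conversion fails
isNaN(bad);                     // true — this is how you test for a failed conversion
```

Checking with isNaN() is handy whenever you convert user input that might not be a clean number.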

Javascript attempts to do automatic type conversion when it makes sense, but beware of the rules – witness the following weirdness:

"3" * "2"    // this returns the number 6
"5" + 1      // returns the string "51"
"5" - 1      // returns number 4

You can see that it’s beneficial to explicitly tell Javascript what you want done instead of relying on the automatic type conversion, which you can do by calling the Number() and String() functions:

Number("3") * Number("2")   // returns 6
"5" + String(1)             // returns "51"
Number("5") - 1             // returns 4

Variable Scope

Understanding when a variable is ‘alive’ and valid is a huge concept. Most current languages allow the same variable name to be declared again within a different lexical scope; i.e. inside a function as well as in the surrounding code. These are logically two different variables, so do not get confused.

In this Javascript example, the variable ‘a’ is declared twice; within the function, and outside the function. The inner ‘var’ makes a whole new local variable ‘a’ that is independent of the outer ‘a’ which has a global scope.

var a=3;      // global

function f()
{
  var a = 5;   // independent (local) variable
}

function g()
{
    // in here, you can access global 'a'
}

Whereas in the following example, the inner ‘a’ actually refers to the outer ‘a’, which may or may not be intentional.

var a=3;      // global

function f()
{
  a = 5;      // affects the global a
}

Quirks abound in Javascript due to its automatic variable declaration:

function f()
{
  a = 5;      // creates a global variable a !!!
}

Having functions ‘reach outside’ their scope is usually a bad practice and can lead to unintentional side-effects.

Best Practice:  Always declare variables with ‘var’ instead of allowing Javascript to figure out if you’re using a new variable or an existing one.

PHP Note: Of course, PHP has to be different. PHP scopes variables only to the scope in which they are declared. This is a very frequent pitfall for PHP programmers used to other languages: global variables look like they’re declared, yet they appear unassigned when you try to use them inside a function.

Access to a global variable from within a function has to be explicitly declared. Why? Probably to protect you from accidentally changing a global variable. So to access a global from a function, you need the global keyword, like so:

$g = 5;    // a global variable
function addG($num)
{
  global $g;    // have to have 'global' here..
  return $g + $num;
}

Functions

Understanding that functions are ideally written without any reliance on external knowledge of who’s calling the function. In standard terms, this is to reduce side-effects as noted above.

Functions written this way can also be modular; think about giving someone else this function to use in their code – will it be possible, or will you have to do something to their code or yours in order to make it work properly?

Parameter Passing

Understanding that parameters in a function are placeholders within that function and allow you to specify functionality without needing to know anything about who’s calling the function.

This again is an example of scope – function parameters are simply scoped to the function.
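A minimal sketch (add() is just an illustrative name):

```javascript
function add(a, b) {    // a and b exist only inside add()
  return a + b;
}
var sum = add(2, 3);    // sum is 5; the caller never needs to know
                        // what the parameters are named inside the function
```

Trying to use `a` or `b` outside the function would throw an error – they live and die with the function call.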

Objects

Understanding the concept of objects is essential in working with the HTML DOM and pretty much needed to do anything useful, such as storing/retrieving data. So much functionality is wrapped inside objects, or arrays of objects, and so forth.

Understand that just about everything has been ‘object-ified’ — it’s one way of looking at the world in programming terms of objects, functions, fields, and events.

Arrays / Collections

Objects become more useful when you can have more than one of them (money, for example, or students, or houses).

Understand moving through collections of objects one at a time, or directly through key values. You’ll do this often; i.e. grab a list of student objects and display all names on the screen.
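Both styles in one sketch (the student data here is made up for illustration):

```javascript
// a collection (array) of student objects
var students = [
  { name: "Martin", grade: 90 },
  { name: "Kathy",  grade: 85 }
];

// moving through the collection one object at a time
var names = [];
for (var i = 0; i < students.length; i++) {
  names.push(students[i].name);
}
// names is now ["Martin", "Kathy"]

// or going directly through a key value
var gradesById = { "A00123": 90, "A00456": 85 };
gradesById["A00123"];   // 90
```

The loop form is what you’ll use to display a list on screen; the key-value form is what you’ll use to look up one particular record.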

More advanced concepts

Dig into HTTP

Understand a little bit about the mechanics behind web traffic – GETs, POSTs, cookies, and so on, so that you know that a server is involved somewhere – not everything useful executes purely in the browser.

This positions you well to use the raft of services available through REST web services, etc., and inevitably to try to debug some of your applications that use web services.
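One concrete piece of those mechanics: a GET request carries its parameters right in the URL, e.g. http://example.com/search?q=shoes&page=2. A sketch of building that query string by hand (toQueryString() is a hypothetical helper, not a built-in):

```javascript
// hypothetical helper: turn { q: "shoes", page: 2 } into "q=shoes&page=2"
function toQueryString(params) {
  var parts = [];
  for (var key in params) {
    // encodeURIComponent escapes characters that aren't URL-safe
    parts.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
  }
  return parts.join("&");
}

toQueryString({ q: "shoes", page: 2 });   // "q=shoes&page=2"
```

POSTs, by contrast, carry their data in the request body rather than the URL – which is why you can’t bookmark a POST.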

Object Models

An Object Model, viewed through Javascript goggles, is a bunch of related objects, with properties and functions to manipulate those properties. An OM tries to model something in the real or abstract world in some sort of logical way so that you can do useful things with it programmatically. An Object Model for a car might look like this:

var car = {
  make: "Mazda",
  model: "CX-5",
  radioPresets: [ "104.5", "96.1", "89.3" ],
  startTheCar: function() {},
  driveTo: function(location) {},
  pickUp: function(persons) {},
  setRadioToPreset: function(presetNo) {}
}

So a car object becomes a handy way to encapsulate in one handy package useful information about an instance of a car (“properties” such as make and model) and to also encapsulate the useful functions a car can perform (driveTo a location), otherwise known as “methods”.

The packaging aspect is a convenience in that you can call methods and access properties using the “.” operator; i.e.

car.make = "Volvo";
car.pickUp(["Martin", "Kathy"]);
car.driveTo("Home");
car.setRadioToPreset(1);

You can hopefully envision a world where objects interact with other objects (cars interacting with road objects), objects contain other objects (cars containing arrays of people objects to represent their occupants), and objects are composed from other smaller objects (a car containing an engine object, a radio object, and four wheel objects), each composed of sub-objects.
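That composition idea, sketched out (the names are illustrative):

```javascript
var car = {
  make: "Mazda",
  engine: {                        // an object nested inside an object
    cylinders: 4,
    start: function () { return "vroom"; }
  },
  occupants: []                    // an array that will hold person objects
};

car.occupants.push({ name: "Martin" });   // add a person object
car.engine.start();                       // "vroom" — drill down with "."
car.occupants[0].name;                    // "Martin"
```

The “.” operator chains naturally, so drilling into sub-objects reads left to right: object, sub-object, property or method.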

Aside for sticklers: the code above is really for illustrative purposes; current Javascript provides a way of defining objects through functions and prototypes in a more object-oriented way (i.e. compared to something like Java or C#).

HTML DOM

More on DOM stuff later; for now, understand that this is an object model (hence the “OM” in “DOM”) that represents in a tree-like form what you see in a browser. You can manipulate the objects in the tree (setting properties and calling methods) to control the visuals in the browser window.

Javascript braces hell

Start to understand Javascript’s approach of functions as parameters to functions, and the resulting nested curly braces. Format your source code properly to foster readability.
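A small sketch of the pattern (repeat() is an illustrative name, not a built-in):

```javascript
// repeat() takes a function as its second parameter and calls it 'times' times
function repeat(times, action) {
  for (var i = 0; i < times; i++) {
    action(i);
  }
}

var doubled = [];
repeat(3, function (i) {    // an anonymous function passed right in the call —
  doubled.push(i * 2);      // note the braces nesting inside the parentheses
});
// doubled is now [0, 2, 4]
```

That `});` on the closing line is the signature of this style – a brace ending the anonymous function, then a parenthesis ending the outer call.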

Events

Understand the concept of events as notifications that occur due to some user or system action.  Remember that everything is an object, and objects can trigger events when something happens to them – like when a page is loaded, or when an image notices that a mouse pointer has moved into its field of view.
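A sketch of the pattern outside the browser – here simulateMouseOver() is a made-up stand-in for what the browser does when it notices the mouse, but the handler-assignment part mirrors how DOM events work:

```javascript
var image = {
  onmouseover: null,                // the event handler slot, like the DOM's
  simulateMouseOver: function () {  // pretend the browser noticed the mouse
    if (this.onmouseover) this.onmouseover();
  }
};

var hovered = false;
image.onmouseover = function () {   // our handler — runs when the event fires
  hovered = true;
};
image.simulateMouseOver();
// hovered is now true
```

The key idea: you don’t call the handler yourself – you hand it to the object, and the object calls you back when something happens.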

Frameworks

Finally, we get to Frameworks. Frameworks are just pre-written code, not magic. The advantage is building stuff faster, likely with fewer bugs, and with more functionality than doing it yourself. The downside is that frameworks sometimes don’t do exactly what you want, and you can easily get into a kid-in-a-candy-store situation: grabbing this UI gadget that uses such-and-such framework, and another one that uses a different framework — with the downside of bloating your page and making it slow or buggy if the frameworks don’t play nice with each other.

But they sure save a lot of time, so just be aware and do your homework.

Philosophy

Reusability and Modularity

I mentioned one tip earlier – write your code in a way that someone might be able to re-use something you wrote without really needing to modify it much, or at all. A simple rule:

If you have a simple function, regardless of how trivial, think about whether it could be refined in a way so that someone could either include the piece of script, or include the function (copy/paste) and use it right away. If not, then it means that there is some dependency on something that exists that shouldn’t be there — probably on a variable you have in your code.
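For example, a trivial but dependency-free function (the name is illustrative):

```javascript
// no reliance on any outside variables — copy/paste it anywhere and it works
function formatName(last, first) {
  return last + ", " + first;
}

formatName("Doe", "John");   // "Doe, John"
```

Contrast that with a version that reads a global variable for the separator: the logic is identical, but nobody can reuse it without dragging that global along.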

Continuous Improvement – Refactoring

It’s rare for a beginner programmer to understand how best to break code into reusable and modular bits.

On one hand, if you don’t break down a module into potentially reusable pieces, then you end up having a huge, monolithic piece of code that looks like it has a lot of repetitive stuff in it.

At the other end of the spectrum, you might break something down into far too many small pieces that, while modular, may seem excessively fussy and may make code readability suffer.

What to do? I typically notice code that I’ve written a few times; say,

document.getElementbyID("welcome").innerHTML = "Hi There";
document.getElementbyID("status").innerHTML = "123";
document.getElementbyID("status").innerHTML = "456";

And think…maybe I can make a function with parameters to do the same thing, and name it something more descriptive. So I refactor my code a little:

function DisplayText(element, text)
{
   document.getElementById(element).innerHTML = text;
}

DisplayText("welcome", "Hi There");
DisplayText("status", "123"); DisplayText("status", "456");

Better! This illustrates a simple way to cut down typing at least. And you can go one further:

function DisplayText(element, text)
{
   document.getElementById(element).innerHTML = text;
}

function DisplayStatus(statusText)
{
   DisplayText("status", statusText);
}
DisplayText("welcome", "Hi There");
DisplayStatus("123");
DisplayStatus("456");

And so forth. Is this better than just the straight document.getElementById() method? Not necessarily, as readability is perhaps impacted (someone will have to check what DisplayStatus() does the first time they see it).

But what happens if you want to change where the status message goes? Then you just go to one place, DisplayStatus(), instead of the two (or more) places in the code you would need to touch if you used the document.getElementById() technique.

All I can really advise around modularity or creating functions is to be alert for

  1. repeated code – that may be a candidate for creating functions to do stuff and to minimize cut/paste errors
  2. useful functionality – maybe I need to put “, ” in between two strings to format a name (“Doe, John”). That might be useful elsewhere, so a function would be a great idea.
  3. centralized functionality – things I know I may want to change often or later – like DisplayStatus() above. Maybe I want to make the message appear in a red colour or different font. Making it centralized ensures that when I change the function, all possible places that I display the status also change with it.
  4. deferment – should I just toss in a function as a placeholder for now, and get on with the rest of the coding that I have in my head, and get back to that later?  I.e.  BigMathFunction(x, y, z, a, b)
  5. plain old readability. Can I just chunk up a huge function into smaller functions and call them in the same sequence; i.e. BeginningPart(), MiddlePart(), EndPart()


Framework Bloat and Missing Fundamentals

I’ve been interviewing with several large companies for a new role in software development management. I got through to the panel round of interviews with a nationwide retailer to work with their e-commerce presence.

Prior to this, I did a little homework and did what I often do to gauge the maturity level of a company’s website: looking for any obvious issues, the kind of technology used, and evidence of UX/UI consciousness.

The Issues

I found some glaring issues, such as terrible page load times stretching up to 8 seconds, which is a huge problem. In addition, page analytics from GTMetrix gave it a failing mark, as did Google’s Page Speed Insights. A product page required a whopping 180+ HTTP requests to load — a number I’ve never seen before (most sites keep it to under a third of this value).

All are red flags that indicate the need for attention – the page will take time to load, causing a potential speed penalty on Google, not to mention customers will drop off; and the extra load on servers would potentially cause scalability problems.

The interview was with my future (non-technical) boss and with several of their existing web front-end developer team members that would have been my subordinates. After the normal pleasantries, the developers proceeded to fixate on my thoughts about the latest in front-end technology. I’m repeating my thoughts in this blog post.

I stated that frameworks and technology change and by necessity, there is always a need (and desire by developers) to keep trying new frameworks, but there are some issues with the frameworks that exist today that need to be understood.

Framework Code Bloat

These issues largely have to do with the size of frameworks/libraries. The interviewer was critical of my experience with one of the older UI frameworks that we used in my previous projects. But a framework is just a framework – a simple way to avoid the tedious Javascript programming needed to pop up a dialogue box or a panel or a lightbox – there really isn’t much magic to it. There is nothing a framework provides that cannot be achieved by a good programmer – just with a lot more time and frustration.

The downside of frameworks is the amount of code required to include one, especially if no slimmed-down versions exist. Including a framework will typically mean a call to the server or CDN to pull the framework’s Javascript includes, plus calls to pull the related CSS and sprites. These calls can introduce extra delays in processing the page and increase browser memory usage. One blog article showed 90% of the CSS going unused on the Bootstrap demo pages.

Single Page Applications

I went on further to emphasize that a lot of front-end design is necessitated by having to bow down to Google’s presence, and that several technologies are currently incompatible with good SEO – SPA (single page applications) built on frameworks like Angular are terrible for SEO, and not a good candidate for building out sites that benefit hugely by having their catalogue pages indexable.

I went on to say that a single product page view is enhanced by bringing in other critical information, such as in-stock status, through better investment in backend and web service systems and established means such as web service and AJAX calls; and that the latest UI frameworks, while fun, don’t replace the need to deliver these fundamental features when the goal is increasing conversions.


Don’t Forget the Fundamentals

But do not forget to keep pushing the fundamentals – page speed, functionality, SEO-friendliness, user experience. Not one of these elements would have been improved by plugging in the latest JQuery/Angular/Web Component framework. What was needed in this case was a roll-up-your-sleeves focus on reducing some of the code bloat on these pages. These frameworks may speed up development, but they have the downside of slowing down responsiveness for the client.

Using tools such as Google Pagespeed Insights, GTMetrix, the performance profiling developer tools in Chrome, Safari, and Firefox, and making the necessary adjustments to the HTML and server-side are key to helping provide that great first visit impression – before the user has even started interacting with the fancy libraries and frameworks – or, even more fundamentally, showing up in a search in the first place.


Regardless of the interview outcome, it was a very, very instructive process. I wish them all the best of luck, of course.

Bye, bye Nikon, hello Canon

(This was written in 2015 — things have changed since then, not for the better for Nikon).

For 25 years I was a Nikon fan, bordering on fanboy/bigot. That’s changed now, and the reasons for the change I found were both emotional and logical.

Interestingly enough, my very first SLR camera wasn’t a Nikon. It was a Pentax that I bought on a whim while on a trip to Edmonton, primarily because my wife-to-be had a Pentax camera.

Not too long after that, I was convinced to buy a Nikon F-801 by my father-in-law to be. He was also getting into Nikon after some years with Pentax (Pentax was a family thing).

Thus started a 25-year journey with Nikon. Through its tenuous days as a camera company when autofocus was invented, I remained loyal, occasionally checking out the competition (Canon) but not liking what I saw despite intriguing technology. I continued to buy the top-of-the-line pro cameras (F4s, F5) and lenses, building a system to last.

Nikon was a conservative company at the time, focused very much on engineering and lens design. Film cameras had pretty much reached their peak around the late 90’s / early 2000’s, with new designs basically being rehashes of old designs. The emergence of digital re-energized the camera market, to the relief of manufacturers.

When digital SLRs appeared, I continued to be a fan, despite some initial quality differences. I envied some of the Canon offerings at the time, which offered some fantastic high-ISO results in dim lighting. But I remained a steadfast supporter of Nikon despite Canon arguably being the go-to pro system at that time. After all, I could still use my lens collection on the new bodies. Nikon was just “better” in terms of having this spiritual connection to its users, and a heritage of engineering excellence.

When the D3 came out, Nikon started to take the crown away from Canon. Nikon users now had bragging rights with what really amounted to a save-the-business set of products (D3/D300). This ushered in a generation of excellent products that brought in unsurpassed image quality (culminating in the current D810, but really, all Nikon products using the Sony sensor technology have, in my opinion, outclassed their Canon counterparts). Canon users started switching to Nikon.

So why, when I was already an owner of a large Nikon lens system, along with the best image quality in the industry would I even contemplate switching over to start again, with less image quality?

The answer is twofold. The first would revolve around video, and my needs for a system that could support stills and video. Canon certainly had the lead there, with the 5D series and their Cinema EOS series (I own the C300 and C100). I could certainly still use my Nikon glass with adapters on the Canon bodies, and in fact older Nikon F-mount lenses are very popular due to having aperture rings, so that wouldn’t have been a substantial problem. Still, having native compatibility was a nice-to-have feature.

The second revolved around the Nikon D4. The logical progression would have me upgrade to the D4 as a mere formality. I loved its predecessor, so it would have been an utter no-brainer to move up in terms of features, resolution, and quality.

But Nikon messed it up. They introduced a new battery system and, in a giant act of indecision, introduced a dual card system with different formats (Compact Flash and XQD) instead of the previous dual-CF system. This caused problems for pros like me — you could not as easily shoot with your previous camera (D3) without bringing extra crap around (another charger, a different set of batteries, and another set of cards). Previously I had shot with a D3x and D3s side-by-side. Now I couldn’t ease into the D4 world with a D4/D3 combination. I’d have to either buy two cameras or stand pat.

The D4’s video capability, had it been decent, may have tipped me to Nikon. But it wasn’t. It was awful compared to the cheaper Canon 5DII/III. For the flagship camera to have such middling video was a disappointment, but in all fairness, Nikon is/was not a video company, and the technical challenge at the time may not have been worth it.

In contrast, Canon’s 1Dx camera introduced a backward-compatible battery (LP-E4n) and charger that can power the older cameras. Perfect. No friction.  1Dx video is quite good as well.

As I’ve become more involved in customer and user experience in my day-to-day work, I’ve become more sensitive to this type of friction — making someone’s life just a little harder may not be a big deal, but small frictions add up until, like me, they move over to a competitor and nothing will bring them back.

In my case, I knew I was giving up superior image sensor quality and great support (at that time, Nikon had a service depot 10 minutes away from me with excellent service — now, alas, they have shut it down). The peer pressure (Nikon was this wonderful, underdog company, sort of the rebels against the evil Canon empire) from family and work (completely Nikon-focused) made the decision even harder to contemplate, let alone admit.

But, when you create enough friction, enough reasons to go elsewhere, and you create the impression that you do not understand how your customers use your products, you are vulnerable to your competition — to a new upstart, cooler rival, or even to your old nemesis. This is what ultimately happened in my decision process.

Even absolute superiority of one’s product is not enough if it is not taken in context with the rest of the product’s ecosystem. Nikon makes better images, but the inputs into the system – lenses, ergonomics, accessories, batteries, cards, support – all factor into the decision for a professional. It came down to “what system can I trust to give me the best quality image in the most adverse shooting conditions?”

Recently, I feel that Nikon has gone down a perilous path that, unchanged, will see its demise. For example:

  • Denying issues with the D600 sensor dust (a design defect where bits of material slough off the shutter mechanism and deposit on the sensor). It took a massive effort, including China denying sales of cameras to Nikon, to finally get admission of a problem
  • D800 focus sensor alignment issues (again, not admitted until a huge internet uproar)
  • Closing of service depots
  • Lens firmware issues (several recent lenses have had recalls for compatibility firmware updates). To add insult to injury, customers have to pay for shipping to the (no longer convenient to some) service depot.
  • Overall build quality of lenses is variable compared to competition (e.g. 24-70/2.8)
  • Terrible design decision for their 70-200/2.8, a staple pro lens, to “cheat” on focal length.
  • A general sense that maybe one shouldn’t buy a Nikon product at introduction until the bugs are worked out
  • No wireless (radio) flash support. This is a great feature of the Canon 600EX-RT system I enjoy. Radio control is reliable in the field under tough shooting conditions, versus the Nikon line-of-sight SU-800/SB9x0 system, where it’s a total crapshoot whether the flash will fire even after the photographer has painstakingly positioned the flashes for successful firing.
  • Lack of direction or strategy for its mainstream crop-sensor (DX) products. Nikon appears to want to push users upwards to its full-frame products (where higher margins exist).
  • Lack of compelling mirrorless product (pro/prosumer level) or pro camera with at least an EVF option.
  • Underwhelming software offerings (Capture/Capture NX) that have not kept pace with the ease of smartphone transfer, or the needs of bulk processing by pros.

I feel that Nikon is not listening to its user base and is cutting costs (sometimes the cuts appear too deep, as with quality assurance). This smells like a company in trouble. They have done impressively well with cutting costs to keep their margins up, but that seems to be the only thing keeping them profitable. At some point all the low-hanging fat (to mix metaphors) will have been trimmed. Where will growth, or even continued sustenance, come from?

Nikon still has impressive products and an engineering mentality, but in the absence of real customer feedback, an engineering-centric company creates products that only its employees feel are useful. Customer input has to drive some of these decisions.

More to follow on Nikon…