Visual Studio 2017, .NET Core, MVC and EF on Mac

I’ve started playing with Visual Studio 2017 on the Mac, pulling across a sample MVC Core + Entity Framework Core tutorial project on the Microsoft site. The tutorial can be found here.

I run Windows in a VirtualBox VM on the Mac, so I have a full Windows 10 + VS 2017 Community install with MSSQL 2016 on it.

On the Mac side, I was trying the latest VS 2017 Community for Mac, along with ASP.NET Core.

I was hoping to suss out how much cross-platform compatibility there was, and how much of a poor cousin VSMac was compared to its Windows counterpart, which is still one of my favourite IDEs.

Installing

Installing VSMac is pretty straightforward and .NET Core is a separate install documented on the Microsoft website.

Moving the project over to the Mac side

I started and finished the project on the Windows VM, running against SQL Server. The project opens up fine in VSMac; I had fairly high expectations that it would, and it does. It even builds and runs!

Of course, the lack of database access on the Mac side needed to be addressed, but the web project hosted inside VSMac started up and ran just fine in Safari on localhost with a custom port number, just as it had in Edge on the Windows VM.

Converting to use MySQL instead of SQL Server would be the next challenge.

Adding Pomelo for MySQL Support

I didn’t find too many options for MySQL. There are hints on the MySQL blog that MySQL Connector works with .NET Core, but I couldn’t actually find a package for the Mac listed in their downloads, so I gave up on that. The option that looked best was Pomelo.

Pleasantly enough, Pomelo.EntityFrameworkCore.MySql is one of the packages listed when firing up the Project | Add NuGet Packages… option in VSMac.

Simply add the package to your project at this point and you’re almost ready to go.
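If you prefer the command line, the same thing should work from a terminal in the project directory; I haven’t verified this on the Mac SDK install, so treat it as a likely equivalent rather than a guarantee:

   dotnet add package Pomelo.EntityFrameworkCore.MySql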

Changing the Entity Framework Provider to MySQL

This was also fairly straightforward. In the Startup.cs file, the database context needs to be adjusted to use MySQL.

From:

public void ConfigureServices(IServiceCollection services)
{
  // Add framework services.
  services.AddMvc();

  services.AddDbContext<WebTestContext>(options => options.UseSqlServer(Configuration.GetConnectionString("WebTestContext")));
}

To:

public void ConfigureServices(IServiceCollection services)
{
  // Add framework services.
  services.AddMvc();

  services.AddDbContext<WebTestContext>(options => options.UseMySql(Configuration.GetConnectionString("WebTestContext")));
}

The connection string in the appsettings.json was also changed to the MySQL flavour:

 "ConnectionStrings": {
   "WebTestContext": "server=localhost;port=8889;database=thedbname;uid=myuserid;pwd=thepassword"
 }

Once this was done, running

   dotnet ef database update

on the terminal command line in the project directory (where the .csproj file is located) should attach to the configured MySQL instance and create the required tables (in this case, just one) for the sample project.

And voila – things run and the database is created. Quite impressive. You can even add a new movie. But alas, not a second movie…because…

Adding Auto Increment to the ID Field in MySQL

For some reason, the Pomelo provider, or .NET, or something somewhere doesn’t know that EF relies on the ID field of the table being an auto-incrementing field. This causes any table inserts beyond the first item to fail with a MySqlException: Duplicate entry ‘0’ for key ‘PRIMARY’ error.

The fix is simple enough; either:

  1. Go into phpMyAdmin and edit the structure of the ID column to check the A_I (Auto_Increment) box, then save the change; or
  2. Run a SQL command to do the same thing – something along the lines of
ALTER TABLE `Movie` CHANGE `ID` `ID` INT(11) NOT NULL AUTO_INCREMENT;
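Whichever route you take, a quick check in the MySQL client confirms the column picked up the change; the output should now show AUTO_INCREMENT on the ID definition:

   SHOW CREATE TABLE `Movie`;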

Notes

  1. Entity Framework Core references must be added manually when using VSMac – you can’t add them through NuGet right now, so the .csproj file has to be edited by hand to create the references (a sketch of that edit follows these notes). This appears to be a bug or limitation specific to VSMac. Because I started this project in VSWin and then moved it over, I didn’t hit the problem here, but I did with a new project started in VSMac. Given that, moving a project back and forth between the two environments seems quite feasible.
  2. Scaffolding to help create the Create / Edit / Delete views does not appear to be present in VSMac, whereas VSWin has an option to create an MVC controller along with its associated views. These would be really handy for building out basic CRUD functionality. However, there may be options using Yeoman. More to come.
  3. Razor tag helpers do not seem to get syntax highlighting in VSMac.
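For reference, the manual edit mentioned in note 1 amounts to adding the package and CLI tool references directly to the .csproj. The version numbers below are illustrative for the EF Core 1.x timeframe rather than gospel, so check NuGet for current ones:

   <ItemGroup>
     <PackageReference Include="Microsoft.EntityFrameworkCore.Design" Version="1.1.2" />
   </ItemGroup>
   <ItemGroup>
     <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="1.0.1" />
   </ItemGroup>

With those in place, a dotnet restore should make the design-time tooling (including dotnet ef) available to the project.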

Next Steps

More research to come, but the next step will be configuring the project so it runs nicely on the Mac under Kestrel, with NGINX or Apache in front as a reverse proxy, without Visual Studio hosting.


Bitten by PHP DateTime mutability

I was finishing off phasing in new content and features on my latest project, the Surrey International Writers’ Conference (www.siwc.ca) website, when reports came in of problems with the published dates of the various writing workshops.

Workshops all adhere to a common schedule on each day of the 4-day conference, so I created a custom TimeSlot WordPress post type to centralize the timeslots and allow workshops to be sorted and grouped together on the master schedule. The data entry page for each workshop has a drop-down list box with friendly names like “Friday 10:00 AM – 11:30 AM”.

The actual format of the workshop date identifier string is “dd-hhmm-hhmm”, where dd is the day (05 = Friday, 06 = Saturday, 07 = Sunday, etc.) and the hhmm values specify the start and end times for the workshop. Our example workshop’s TimeSlot ID string, as stored in the workshop metadata in WordPress, would be “05-1000-1130”.
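The parsing of that string isn’t shown in this post, but just to make the format concrete, splitting a TimeSlot ID apart in PHP looks something like this (a sketch, not the actual plugin code):

// break "05-1000-1130" into its day and time components
list($dd, $start, $end) = explode('-', '05-1000-1130');
$day = (int) $dd;    // 5 (Friday)
// $start = '1000', $end = '1130'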

I had arbitrarily assigned Day 5 to be the Friday of the conference. This allowed the conference in future to potentially start earlier, say on Wednesday (3), or Thursday (4) to accommodate master classes or other pre-conference activities without needing to re-enter new timeslots every year (timeslots do not change year by year).

So when calculating the actual date of a workshop to display to users, I needed to subtract 5 days from the global start date of the conference (expressed as the date of the Friday), then add back the day number from the workshop’s ID. For example, a Saturday 10:00 AM – 11:30 AM workshop would have the ID “06-1000-1130”. Taking 5 days away from the date of the Friday (October 20, 2017), then adding 6 days, gives us October 21, 2017 as the date of the Saturday.

That’s where the problem arose…

Here’s the original code, taking 5 days off the conference start date, $siwc_conference_start (a DateTime object), then adding the workshop day value I had extracted previously from the ID string.

// $day = workshop day, 5=Friday, calculated previously

// following line modifies $siwc_conference_start!!!
$thisdate = $siwc_conference_start->sub(new DateInterval('P5D'));
$thisdate = $thisdate->add(new DateInterval('P' . $day . 'D'));

return $thisdate;

This caused no end of problems! I was getting seemingly random dates for the Friday – like October 23, October 21, etc.

After much tracing, I isolated it to the fact that the all-important $siwc_conference_start variable, which should have been, and needed to be, constant, was actually changing in value between calls! The only possibility was the sub() method call (and the add(), of course, upon later reflection) affecting the object directly rather than simply returning the result of the calculation.

This exposed an issue I hadn’t been aware of in PHP: the mutability of the DateTime class, which is what $siwc_conference_start was declared as.

The sub() method was actually acting on the $siwc_conference_start object itself, subtracting 5 days from the conference start date in place (and returning the modified object), rather than returning a new object with the 5 days subtracted and leaving the original alone.

The subsequent add() method was adding a different value back to the conference start date. So the conference start date value was bouncing between different values, some actually correct, depending on prior calls. This also explained why I hadn’t caught the bug in my earlier tests – my test data didn’t exercise the function sufficiently (either it had all Friday data, or I didn’t notice the discrepancy).

The fixed code:

// $day = workshop day, 5=Friday, calculated previously

$thisdate = clone $siwc_conference_start;
$thisdate->sub(new DateInterval('P5D'));
$thisdate->add(new DateInterval('P' . $day . 'D'));

return $thisdate;

This is one fix – using the clone keyword to create a new object containing a shallow copy of the $siwc_conference_start object’s contents. The sub() and add() methods will act on that copy, not affecting the original.

There is also a second fix: declaring $siwc_conference_start as a DateTimeImmutable, a class introduced in PHP 5.5. This newer class is an admission that the original DateTime implementation was flawed in allowing the original object to be changed. DateTimeImmutable’s modifier methods return a new copy of the object and leave the original alone, as you would expect.
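A minimal sketch of that second approach, using the 2017 Friday date from above as the conference start:

$siwc_conference_start = new DateTimeImmutable('2017-10-20');

// sub() and add() now return new objects; the original date is never modified
$thisdate = $siwc_conference_start
    ->sub(new DateInterval('P5D'))
    ->add(new DateInterval('P' . $day . 'D'));

return $thisdate;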

Overall, this highlights a couple of interesting things about object mutability: when it is expected, and when it is not. The original implementation breaks a couple of expectations that programmers bring from other languages…

  1. Expectations carried over from other languages’ DateTime types. The .NET DateTime, for example, is a value type, so it would have behaved as expected in the first implementation.
  2. The expectation that a DateTime is best implemented as a value object rather than a reference object. There are few situations where dates and times are better off as reference objects (e.g. objects meant to be shared) than as value objects. It’s not that the implementation was technically wrong, but the expected usage by programmers did not align well with the initial design decisions for this class.

This is also a lesson about creating a proper test harness for functions like this, which are easy to test. I could easily have created a sample set of timeslot IDs with their expected results, like the quick check sketched below, and the bug would have been caught before going into production.
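Even a crude spot check would have done the job here. Assuming the date logic above is wrapped in a helper (siwc_timeslot_date() is a hypothetical name for it), something along these lines would have flagged the bug immediately, since repeated calls were corrupting the start date:

// expected dates for the 2017 conference, keyed by workshop day number
$cases = array(
    5 => '2017-10-20',  // Friday
    6 => '2017-10-21',  // Saturday
    7 => '2017-10-22',  // Sunday
);

foreach ($cases as $day => $expected) {
    $actual = siwc_timeslot_date($day)->format('Y-m-d');  // hypothetical wrapper around the code above
    if ($actual !== $expected) {
        echo "Day $day: expected $expected, got $actual\n";
    }
}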

You’re doing Scrum wrong

I’ve had the opportunity to talk to a few local organizations about Scrum, and have also attended some formal Scrum courses (scrum.org) to, in a sense, realign with the mothership on what Scrum is, or should be.

My conclusion is that there is a lot of confusion around Scrum practices which leads unsurprisingly to the questioning of its benefits and attempts to modify it.

An overarching problem is that Scrum simply isn’t being executed properly.

I have an analogy for this. Being Canadian, my thoughts naturally flow to hockey, but it applies to any team sport. Hockey, like any sport, has a body of rules, and referees enforce those rules on the rink, field, or pitch.

The rules of hockey, though, do not tell you how to play hockey. They do not tell you how to execute a power play, or that you can pull the goalie for an extra attacker if you’re down a goal, or that you should shoot high on a particular goalie because he goes down on the ice too soon, or that you should line up your players this way or that way to counter the lineup on the other side.

The richness of Scrum is in its proper execution by a team; the rules are mainly the guardrails along the side that make sure we’re all playing the same sport – hockey, not rugby or football.

Yet we seem to persist with Scrum leaders who only know how to referee, not coach. We need coaches who can see the rules as their creators did. Scrum is incredibly flexible precisely because its rules don’t actually create the boundaries they appear to.

I’ll write further on this, but let me touch upon one concern at a time. Here are a few objections I’ve heard:

  1. Our Lead Developers are swamped having to do the estimation work
  2. Scrum doesn’t handle out-of-band requests like emergency bug fixes
  3. We need longer than 2 weeks (or a month) to release something because our systems are complex and changes are required in many different places
  4. We can’t realistically release an Increment into production every two weeks because our testers find bugs in the test environment and we have to fix them.
  5. We have a continuous improvement or continuous development methodology and we like to deploy during a sprint. We can’t always wait until the end of the sprint.

Here are some potential answers:

  1. A fundamental concept of Scrum is the cross-functional Team, and the Team as a whole is tasked with creating estimates. Having a Lead role runs somewhat counter to this idea. At the very least, it takes some control away from the developer who may actually be doing the work (assuming it’s not the lead developer), and it definitely creates extra work for the lead developer, who has taken on a responsibility that could be shared among the team. I would suggest that the estimation work be done by the whole team at the appointed time (a backlog refinement meeting once every Sprint, for example).
  2. No, Scrum doesn’t prescribe how to handle bug fixes or other emergencies that can throw an entire Sprint’s progress into jeopardy. What Scrum encourages is transparency with stakeholders and an organized process, so that everybody knows what’s happening. A single point of contact through the Product Owner is very helpful in deciding whether a bug is important enough to be fixed immediately, or whether it can be put on the backlog for the next Sprint.
  3. Scrum suggests no longer than four weeks for a Sprint. Four weeks is really based on the fact that organizations work on a month-to-month basis when generating financial and other metrics. These are the metrics that help determine the value of our development efforts, so it is crucial that they be aligned with the outputs of those efforts.
    A second answer, one that may indicate the need for a philosophical shift in the development process, is that you may need to look at delivering thin, vertical slices of functionality across all systems rather than dividing the effort horizontally, i.e. a front-end team plus a back-end team working independently but with deep dependencies on each other.
    Vertical slices of functionality allow for more agile adaptation to change, as you only implement as much as you need to deliver the functionality, rather than building excess capability in advance in the hope of using it later.
  4. Scrum should encompass the entirety of the product development process. “Tossing the product over the wall” and gating tasks beyond the sprint, such as testing, hardening, deployment, security testing, and so on, violates the idea of a product Increment that is ready for production. It also gives a false impression of the state of the product: if there are bugs or quality checks still to pass, then it is most definitely not done. The fix is a more stringent Definition of Done for the sprint that includes the quality criteria, the testing, and the other tasks that were previously outside the sprint.
    This will potentially change the perception of what is achieved in a sprint, but it is a much more truthful and transparent view of the development process as a whole.
  5. Continuous Deployment is not forbidden by Scrum – the Sprint should be seen as a cadence or rhythm that ensures the Scrum events and artifacts happen on a regular basis, rather than a fixed delivery window for one almighty, all-encompassing Increment at the end. Each task could simply include a “deployed into production” checkbox in its Definition of Done.
    Continuous deployment is fantastic, and a great way to ensure value is delivered as soon as possible; why not deploy a feature the moment it is ready?

As you can see, it is sometimes the perception of what the rules mean that becomes the impediment to a great Scrum implementation. One needs to look deeper at the intention of Scrum, rather than at the raw rules or literal interpretations of what each Scrum artifact implies.