Load-balanced SSL WordPress on AWS

Introduction

I recently deployed a load-balanced production WordPress system on Amazon AWS. The process was mostly just time-consuming; the only major issue involved SSL.

First off, it is definitely viable to run a load-balanced WordPress infrastructure using all of the AWS infrastructure goodies, with no expenses beyond the AWS charges. The AWS Application Load Balancer works seamlessly and, as a bonus, can terminate SSL sessions, which eliminates the hassle of obtaining an SSL certificate for each instance (by purchasing one or deploying Let’s Encrypt certificates with auto-renewal cron jobs).

WordPress is supposedly stateless; however, I found that this is not always the case, and the system worked most consistently with the load balancer in sticky sessions mode.

AWS Load-balanced WordPress Setup

I won’t go particularly deep into the setup, other than to list the high-level steps. Resources are available on the internet for each step, but nothing fully comprehensive, and each step brought its own pain points:

  1. Install AWS Linux on EC2 instances
  2. Install PHP, Apache, memcached.
  3. Install an FTP server (vsftpd) and a dedicated FTP user in PASV (passive) mode, the only mode that works on EC2. My FTP client of choice, Transmit for Mac, refused to connect without configuration changes to vsftpd, although Filezilla had no problems with the initial configuration. Another pain point.
  4. Configure EC2 security groups to allow specific TCP traffic to ports (80-http, 443-https, 22-ssh, 21-ftp, passive ftp range to match vsftpd, etc.)
  5. Configure RDS MySQL instance
  6. Configure RDS security group to allow TCP port 3306 traffic from EC2 security group
  7. Install WordPress conventionally
  8. Deploy WordPress site to the server using plugin
  9. Configure an EFS volume for the wp-content directory, copy the existing wp-content folder to the EFS volume, then mount the EFS volume in place of wp-content
  10. Create a Route 53 hosted zone for my domain, using fairly low TTL settings (5-10 minutes) to allow for quick modifications
  11. Modify my existing domain provider’s DNS hosting record to point to Route 53’s servers for name resolution. This can take up to 48 hours so plan ahead.
  12. Create a new certificate for my domain using Certificate Manager
  13. Set up a new application load balancer to serve both HTTP and HTTPS traffic using the certificate and across 2 or more Availability Zones. Create a new target group for the load balancer for port 80 only
  14. Register the existing EC2 instance in the load balancer target group
  15. Configure Route 53 to set the domain’s A record to alias to the load balancer (you can alias to the load balancer by name rather than by IP address, which is handy)
  16. Configure SSL detection (below) on the WordPress installation
  17. Configure W3 Total Cache plugin to use memcached instead of disk caching, enable minification, and AWS CloudFront
  18. Server setup is done!
  19. Create AMI (machine image) from the existing instance so it can be used again
  20. Configure EC2 Auto Scaling and test creating new EC2 instances into the target group using a launch template that references the AMI
  21. Configure AWS WAF with available WordPress WAF rules
  22. Configure CloudWatch

For the initial round, you can consider the environment as individual standalone servers running off a common RDS MySQL instance and a shared wp-content EFS volume. No per-instance user variables are needed when starting up another load-balanced instance.
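For example, the shared wp-content mount in step 9 amounts to something like the following on each instance (a sketch only; the filesystem ID, region, and web root path are placeholders, and an equivalent /etc/fstab entry is needed so the mount survives reboots):

# hypothetical filesystem ID, region, and web root
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/html/wp-content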

Other useful steps:

  1. Adjust directory and file ownership, group, and permissions to allow Apache and the FTP user to access files. Apache needs 755 (directory) and 644 (file) permissions. Fairly straightforward command line examples can be found; e.g.
chown apache:apache -R *
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;

Shared wp-content volume – pros and cons

I am running a shared EFS volume for the wp-content folder. This primarily solves the issue of uploaded content for all servers, but also conveniently solves concerns around deploying plugins, theme changes or fixes to all servers. Content changes by editors and general server administration, with the exception of changes to the root directory (wp-config.php) and WordPress code, can be done as if working on a single server.

With the convenience comes a potential downside for plugins or other code that may not have been written to deal with files on a shared volume. Since file locking on NFS-backed volumes is not reliable, code that writes to shared files can run into problems. One example is the popular Wordfence security plugin, which offers a feature to store its data in MySQL to get around this issue (which I didn’t get working). Other plugins may be less fortunate. I found entries in my Apache log pointing to file locking problems, and they increased as I increased the server load. I was not able to determine whether these actually resulted in any server issues.

The best approach is probably to isolate the wp-content/uploads folder as the common mount point and keep each server’s plugin folders separate.
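One possible way to do that (a sketch only; the filesystem ID and paths are hypothetical) is to mount the EFS volume somewhere neutral and share only the uploads folder via a symlink, leaving plugins and themes on local disk:

# hypothetical filesystem ID and paths; plugins/themes stay local, only uploads are shared
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
sudo mkdir -p /mnt/efs/uploads
sudo ln -s /mnt/efs/uploads /var/www/html/wp-content/uploads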

SSL Detection – The Big Gotcha

SSL detection was the biggest issue I ran into on the site. The AWS load balancer handles the SSL connection termination and acts as a reverse proxy to send traffic to each target EC2 instance. This traffic is sent as HTTP, not HTTPS. This is why the load balancer instance targets are configured to listen on port 80.

SSL detection is important to WordPress because inconsistency about whether to use SSL can cause mixed content (secure https and non-secure http) to be served from the WordPress site. A securely served HTML document could contain non-secure references to CSS or script files. Because modern browsers forbid loading mixed content, this will break websites.

On my site, the mixed content showed up in strange ways, such as third-party Gutenberg blocks not showing all block options in the editor, or subtle style differences. The site may not fully break, because only some of the necessary files may be omitted, depending on the plugin involved and how it uses SSL detection. Some pages may look fine if they do not use the affected styles or JavaScript references.

Detecting mixed content can be deceptive, as no errors are shown to normal users. The developer console in most browsers will show the errors, however.

Underneath the WordPress covers there is a function, is_ssl(), that returns true or false depending on whether the incoming request is using SSL. This function is used in many places, internally in WordPress and in plugins, to help determine whether the response back to the caller should be built in SSL mode.
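For reference, the core implementation boils down to checking a couple of server variables. Paraphrased (not verbatim from WordPress core), it looks roughly like this:

// Simplified paraphrase of WordPress's is_ssl()
function is_ssl() {
    if ( isset( $_SERVER['HTTPS'] ) ) {
        // Web servers normally set HTTPS to 'on' (or '1') for SSL requests
        return ( 'on' === strtolower( $_SERVER['HTTPS'] ) || '1' == $_SERVER['HTTPS'] );
    }
    // Otherwise fall back to the port the request arrived on
    return isset( $_SERVER['SERVER_PORT'] ) && '443' == $_SERVER['SERVER_PORT'];
}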

The inability of is_ssl() to return consistent results is what causes mixed content to be returned to the caller, and the main cause here is the load balancer sending requests to the server over port 80, masking the original connection’s SSL status.

I eventually found two ways to solve the SSL detection issue: using a plugin, or inserting some code into wp-config.php.

Step 1: Fix the SSL detection.

Plugin Option: The SSL Insecure Content Fixer plugin (https://ssl.webaware.net.au/https-detection/) was able to fix this. The plugin detected the best of several available SSL detection methods and suggested that I use it, which I did.

Code Option: AWS load balancer-specific code can be added to the wp-config.php file. This code detects the presence of the HTTP_X_FORWARDED_PROTO header, which the AWS load balancer adds to all requests, and if it indicates https, sets the HTTPS entry in PHP’s $_SERVER superglobal. That entry is what is_ssl() checks to return true or false. It boils down to just a couple of lines of PHP:

// Tell WordPress the original request was HTTPS when the load balancer terminates SSL.
// The isset() guard avoids a PHP notice when the header is absent (e.g. direct access).
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) &&
    strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
   $_SERVER['HTTPS'] = 'on';
}

Step 2: Verify that the Site URL and Home URL contain the https:// prefix. Whether they are stored in the database or defined in wp-config.php, both should be checked to make sure they use the https:// prefix. This will prevent redirect loops.

If you are trying to access a site that was previously non-SSL behind your newly configured SSL-enabled load balancer, it might not load properly, because the old non-https URLs are stored in the database. Luckily, these values can be overridden in wp-config.php, allowing the site to load properly. The following code does the trick:

define('WP_HOME','https://your.domain.here');
define('WP_SITEURL','https://your.domain.here');

IMPORTANT NOTE: Both chunks of code must be ABOVE the ‘/* That’s all, stop editing! Happy publishing. */’ comment line in wp-config.php, or they will not set up the environment before the request is processed.

For interest’s sake, when I dissected the code for the SSL Insecure Content Fixer plugin, I was happy to find that the AWS-specific option used the same code as the manual wp-config.php fix above. The other options in the plugin deal with reverse proxy strategies from other cloud providers or Nginx/Apache combinations.

Sticky Sessions

Though WordPress is designed to be stateless in its base form, plugins or themes may require sticky sessions to be enabled on the load balancer, as was the case for me. I suspect that WP User Manager, the content restriction plugins I was using, or some related code was statically caching user role information from the database to check later whether the current user was in a specific set of roles.

My site uses roles to serve specific content to different user types (e.g. regular attendees versus those with access to master classes). A regular content site not relying on roles would probably not run into this problem and could run with sticky sessions off, which should help balance the load on the servers. Even so, I found that a short sticky session duration (1-5 minutes) appeared sufficient to fix any session problems.

No Home Page Browser Caching

An optimization that was required in my case was to modify the cache-control headers for the site home page. Because this is a membership-enabled site, non-logged-in users are prompted to log in when they access any page, including the home page. This is achieved through a 302 redirect fired by the content control plugin, which redirects to a separate login page with a query string parameter containing the source page (in most cases the home page) to return to after a successful login.

Because this redirect occurred on the home page, browsers would cache it: when a user logged in and subsequently returned to the website URL (e.g. by clicking on the site logo), the cached redirect would open the login screen again, leading to confusion.

One way to solve this is to put some form of nonce or ignored parameter in the logo link (e.g. https://your.domain.here?value=nnnn), where nnnn is some random value that makes the home page link unique so it does not pull the cached version with the redirect.

This method worked for me, but I decided to use an alternative more suited to this website, which was to discourage browser caching of the home page entirely. The home page is the current events page for the site, so its content is the most subject to change and is best not cached at all; the additional server load was not going to be an issue. I tried to configure W3 Total Cache for this, but was not able to find that level of control.

I found a way to do this by adding an action for the modify_http_headers event in the theme’s functions.php file, setting the Cache-Control header to “no-store” if the requested page is the home page, and to “max-age=600” for other pages.

Note that the header “Cache-Control:no-cache” was also tested, but did not force the cache to be bypassed as required. The “no-store” option worked as intended for this scenario.
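As a rough illustration only (not the exact code from my functions.php), the same effect can be achieved with WordPress’s template_redirect action, which runs after the main query is set up but before any output:

// Sketch: discourage browser caching of the home page only
add_action( 'template_redirect', function () {
    if ( headers_sent() ) {
        return;
    }
    if ( is_front_page() ) {
        // Never store the home page, so a cached login redirect is not replayed
        header( 'Cache-Control: no-store' );
    } else {
        // Other pages can be cached briefly by the browser
        header( 'Cache-Control: max-age=600' );
    }
} );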

Test, Test, Test

I can’t overemphasize the value of load testing for peace of mind. Load testing will give you an idea of the infrastructure required to support the proposed volume. With cloud providers like AWS, Azure, and Google, there is no excuse for not testing assumptions on an actual environment, including load balancing, scaling out/in, and failover.

AWS makes it quite painless to conduct experiments around scaling out servers, the type and size of database needed, the types of instances needed, etc. You can easily spin up 8-10 or more servers at minimal cost for testing sessions that may only last a half-hour or an hour at most. I was able to test against the production environment before going live, and feel comfortable doing that during low-volume hours.

I am also able to build out a full load-balanced environment for testing or staging with minimal fuss: the same AMIs used in production can be used for this test environment, simply configured to hit a different database by modifying the wp-config.php and adjusting the EFS mount point. This provides deployment consistency in the machine images. When done, the instances can be stopped or terminated entirely.

I wrote Locust scripts running on a separate EC2 instance in the same VPC to exercise the various pages on the production website, simulating loads caused by users logging in around the same time each day, and checking content. While this may not be fully representative of the traffic because static content requests are not part of the Locust scripts, the scripts still exercise the CPU- and database-intensive portions of the site, which are the initial login and content page requests. The extensive use of CloudFront to cache static files will also route these requests away from the servers.
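A minimal sketch of the kind of Locust script used is below; the URLs, credentials, and content page are placeholders, not the real site’s values (wp-login.php is the stock WordPress login endpoint):

# locustfile.py - run with: locust -f locustfile.py --host https://your.domain.here
from locust import HttpUser, task, between

class MemberUser(HttpUser):
    wait_time = between(1, 5)  # think time between requests

    def on_start(self):
        # Simulate the daily login burst with placeholder credentials
        self.client.post("/wp-login.php", data={"log": "testuser", "pwd": "testpassword"})

    @task(3)
    def home_page(self):
        self.client.get("/")

    @task(1)
    def content_page(self):
        self.client.get("/members/daily-schedule/")  # hypothetical content page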

I found that with extensive memory caching using memcached, a smallish database instance was sufficient for my needs – in this case, a db.t3.small RDS instance running MySQL was totally adequate, and I could probably go a size smaller. The caches are refreshed every 15 minutes, and very little content needs to go live right away, except for the person testing the update; a manual cache flush can be done on that server to check that the content is correct (a side benefit of having sticky sessions enabled).

Likewise, the traffic volume was handled by multiple t3.small EC2 instances. Testing showed that more, smaller servers were better than fewer, larger ones; AWS’s pricing makes little difference between X servers of Y capacity and 2X servers of Y/2 capacity.

High-volume testing stressed the servers to capacity, allowing me to anticipate potential hotspots like the shared EFS volume. More file locking warning messages appeared in the logs at higher volumes, so this is an area for future research. What isn’t helpful is the utter lack of information in these warnings as to what is causing them.

Newspaper article processing with Azure Cognitive Services and Python

The challenge: process 100 years of newspaper articles related to Scottish associations in Vancouver and try to make sense of what each one is about.

My wife is currently working on her Master’s dissertation in family and local history, exploring the Scottish association culture in Vancouver. Much of the association history in the city spans from the 1880s through the 1980s: from the city’s founding in 1886, through the rise of associations serving the needs of the Scottish diaspora, to the decline of these associations in the 1960s and beyond.

Although her research also includes association archive files and taped interviews, Newspapers.com’s searchable archive of newspapers proved to be a useful source for association activities. Searching by each association name retrieves articles related to its activities: meetings, concerts, dances, Burns Night suppers, fundraisers, and general business. Newspaper mentions are a good proxy for association health, due to the lack of other communication channels during the time period analyzed – newspapers were arguably the medium of choice to help bring in new members and to publicize events held by these associations.

Manually going through the estimated 29,000 articles in this period was much too arduous, so my wife had initially planned to perform a broader summary of articles by date or source rather than dive into the details of each article. A simple count of articles by year shows the growth into the WWI/WWII period, then the steady decline into the 1980s. The question was whether there was more to be gained by understanding the data further.

Example thumbnail of an article

I had previously done some historical statistical analysis of census documents decades ago while working on my university degree. For that, I used SPSS and manual data entry of 1890s census data to look at typical occupations for Chinese residents of British Columbia.

I pitched her the idea of a modernized approach that used computing power to read through the results generated by newspapers.com and help categorize each search result, all with a wonderful mashup of technology: Azure, local MySQL, .NET, C#, Excel, Jupyter, and Python.

This is what I did:

Step 1: Collect newspaper images into CSV

The initial step was conducted through Newspapers.com and a Chrome browser-based web scraping tool. A query was executed through newspapers.com’s search interface for each organization of interest, and the search results were scraped into a CSV file. The CSV contained the query, association name, newspaper name, date, and the URL of the article thumbnail graphic.

Inquiries were made to Newspapers.com to see if queries could be executed directly without needing to scrape pages, but their support staff were not able to provide us with anything. It would have been beneficial (and perhaps a good service offering – hint) for Newspapers.com to provide features for researchers, such as a REST API for obtaining results or fuller views of the newspaper pages; this would have saved us tremendous amounts of time and given greater accuracy. It would certainly be a feature we would have gladly paid for above and beyond our existing subscription.

The dozens of resulting CSV files were assembled together into one large CSV file.

Step 2: Get around Excel date limitations

A well-known and annoying problem with Excel for historians is its inability to work with dates before 1900. Such cells cannot be formatted as a date type, and date math does not work on them. Knowing that Excel would be the analysis tool used, the workaround was to write an Excel macro to parse the date returned from newspapers.com (e.g. “Thursday, October 01, 1914 -”) into discrete year, month, day, and day-of-week columns within the spreadsheet.
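For illustration only (the actual workaround was an Excel macro, not Python), the parsing logic amounts to something like this:

# Illustrative parsing of the scraped date string; the trailing " -" is stripped first
from datetime import datetime

raw = "Thursday, October 01, 1914 -"
parsed = datetime.strptime(raw.rstrip(" -"), "%A, %B %d, %Y")

year, month, day = parsed.year, parsed.month, parsed.day
day_of_week = parsed.strftime("%A")   # "Thursday"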

Step 3: Load MySQL database

The Excel file was imported into a local MySQL database on my Mac resulting in 29,237 rows.

Step 4: Azure text recognition

Next was investigating text recognition on the thumbnail graphics. Microsoft offers text recognition through Azure Cognitive Services Computer Vision. A quick search through Microsoft’s samples turned up what I was looking for: .NET samples for uploading an image for processing, calling the API, and receiving back a result containing the text. All that was needed was to wrap that with database access code. As I still had some remaining credit on my Azure account, Azure got the nod for this part.

Using Visual Studio, I wrote a quick and dirty C# console program that would loop through the records in the MySQL database using Entity Framework, download the image thumbnail JPEG file from newspapers.com to the machine, submit the image to Azure Cognitive Services’s text recognition API, and save the recognition results back to the MySQL database.

As the sequence of operations was run in a single thread, the total recognition phase took around 7-10 seconds per row, or a total of three days to complete for the 29,000+ rows. Running multiple worker threads would have sped up the process, but I was concerned about running up costs if there were any bugs in my implementation leading to reprocessing rows by mistake, so I erred on the side of watching the system closely.

The initial testing of the application was thankfully possible at the free service level (limit of 5,000 calls / month), and running of large batches for the remainder was done at the paid service level.

The total cost for the Azure services consumed was about $69 Canadian and yielded results like the following:

H. N. Tait, president, wil
chairman, and N. K. Wright
he master of ceremonies of
annual Glasgow and Dis
Association's Burns' banquet
dance on Friday at 6:45 p.m
Hotel Georgia.

Analysis of the data using MySQL queries finally yielded good insight into the contents of the articles, and some useful grouping and filtering could be done. Certainly, categorization using SQL queries such as

select count(*) from articles where content like '%dance%'

would yield useful results, but the limitations of simple string matching became more apparent when the actual contents were reviewed.

Step 5: Data analysis with Pandas and Jupyter

In order to make the analysis and categorization step easier, I wanted to work with Python’s data science friendly tools, including NumPy and Pandas. The data in the MySQL database was exported into Excel format and imported into a Pandas DataFrame inside the Jupyter notebook for more interactive analysis.

Jupyter conveniently allows the writing of Python analysis code and viewing of results in a single document, rather than jumping between a code environment and viewing environment. The idea was to essentially work with the data in 2-D columns and rows, and augment it with analysis results in additional columns.

The processed dataframe was then exported in Excel spreadsheet format. The new spreadsheet appears identical to the input spreadsheet, with the addition of new columns holding a match score indicating how strongly each row’s text relates to that column’s category (“dance”, “fundraising”, etc.).

Part of the Jupyter notebook showing the categorization function and related calls
The Pandas dataframe as viewed in Jupyter with added category columns and match scores

Step 6: Categorize text based on string matching

The next step was to tag each article with categories such as “fundraising”, “entertainment”, or “meeting”, based upon the text content and the analysis required.

One challenge was that the returned text was not perfect; in many cases the newspapers.com thumbnail contained only a fragment of the article, truncating many words or incorporating words from other articles on the page.

In most cases, the recognition performed well when the thumbnail contained good data:

H. N. Tait, president, wil
chairman, and N. K. Wright
he master of ceremonies of
annual Glasgow and Dis
Association's Burns' banquet
dance on Friday at 6:45 p.m
Hotel Georgia.

In other cases where the thumbnail contained multiple columns, Azure joined unrelated columns, truncated words, and identified words incorrectly:

est she can- lumbia "sorters the me rs of the Gaolid up pape angements for conditio to be held in Press he evening of been oh

The first approach was to determine the category of an article based on key terms. Scanning all 29,000 rows of text showed that words like “donate” or “dance” showed up frequently and matched the intended statistical analysis of popular association activities like fundraising and dancing. Examples of rows of text containing the word “donate” are the following:

centre donated by ter raffled in aid of du the gaelic society abl et no. 262. the su to $42.90. wo fes the 

iss maude avery. at the last business lottish ladies societ 25 was donated to rder of nurses. arra made for a picnic to

r the sunshine fund o d caledonian society & articles donated by the ld in the moose hall or ock. the auction will

vement in the death mother. gaelic society ank those who foun d at funeral service donated flowers or

In this case, the word “donated” is quite clear in the text. Other words related to fundraising found in the overall text results were “rummage sale”, “bursary”, and “scholarship”; these were also added to the dictionary so that any occurrences are tagged under fundraising.

Note that in the last example, “donated” refers to the flowers and the article intent is not fundraising, so there will be inevitable miscategorizations when purely tagging based on the presence of text in the article rather than doing a deeper lexical analysis.

A single article can be tagged under multiple categories depending on the search terms, like the following winner, which hit on the dancing, fundraising, entertainment, memorial, hall, and competition categories:

trinity holds rummage sale at 8:30 a.m. in the Memorial Hall, 514 Car- narvon, New Westminster. Highland Society concert at 8 p.m. in Scottish Audito rium, Twelfth and Fir, fea tures Scottish and Gaelic sing ing piping and dancing.

Step 7: Refining the fuzzy text matching algorithm

To obtain broader categorization beyond exact string matches, a fuzzy string matching function was implemented to cover three possibilities when searching for a term inside the recognized text:

  1. Complete search term found – highest match score (150) using regex
  2. Truncated search term (missing one character at the beginning or end of string and preceded by either a space or by the beginning of the line) – next highest match score (120) using regex
  3. Search term found by fuzzy search, using the Levenshtein string distance algorithm (implemented in the Python fuzzywuzzy module), which is capable of matching phrases – match score 100 and below

The match score for the string was then compared to a manually derived threshold, above which a match was declared. A default threshold of 85 was found to be suitable for most search terms. The match score itself was then saved to the dataframe column for that category, rather than a binary True/False value, to allow room for further analysis.

A tighter match threshold was needed for shorter words whose truncated or fuzzy variants could trigger false matches. The search term “band” (i.e. pipe band) is an example where a threshold of 150 is needed, as its fuzzy derivative “and” matched many unrelated articles.

The resulting function is called searchAndClassify(term, category, level), taking parameters for the search term, the category to score for a match, and the threshold level for a match (defaulting to 85 if not specified).
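A rough sketch of what such a function might look like follows; the real notebook code differs, and the dataframe and column names here (df, recognized_text) are assumptions:

# Rough sketch, assuming a dataframe df with the OCR text in a 'recognized_text' column
import re
from fuzzywuzzy import fuzz   # Levenshtein-based fuzzy matching

def searchAndClassify(term, category, level=85):
    if category not in df.columns:
        df[category] = 0                            # one score column per category
    for idx, text in df['recognized_text'].fillna('').str.lower().items():
        if re.search(r'\b' + re.escape(term) + r'\b', text):
            score = 150                             # complete term found
        elif re.search(r'(^|\s)' + re.escape(term[1:]) + r'\b', text) or \
             re.search(r'\b' + re.escape(term[:-1]) + r'(\s|$)', text):
            score = 120                             # truncated term (one character missing)
        else:
            score = fuzz.partial_ratio(term, text)  # fuzzy phrase match, 0-100
        if score >= level:
            df.at[idx, category] = score            # keep the score, not just True/False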

Each search term is checked by calling the searchAndClassify() function. Currently, each term is manually hardcoded, with the level parameter fine-tuned to obtain the best results while eliminating false positives.

Examples of the analysis code that calls searchandClassify() are as follows:

...
searchAndClassify('concert', 'isentertainment')
searchAndClassify('donated', 'isfundraising')
searchAndClassify('amount', 'isfundraising', 150)  # needs exact match to skip "mount" in "mount pleasant"
searchAndClassify('red cross workers', 'ismeeting', 80) # SDL
...

Fuzzy text searching using the Levenshtein distance can be useful in categorization situations where a phrase is applicable. This can help sidestep limitations of single word phrases being too broad to categorize. In the above example, “red cross workers” as a term can help select for that phrase and limited variants, where “workers” may be too broad.

Step 8: And finally, back to Excel

As Excel was a much friendlier end-user tool, the Pandas dataframe was exported into a final Excel spreadsheet for further analysis and creation of visualizations.
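The export itself is a one-liner in Pandas (the filename is a placeholder, and an Excel writer backend such as openpyxl needs to be installed):

# Write the categorized dataframe out for further analysis in Excel
df.to_excel("categorized_articles.xlsx", index=False)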

Future Steps: Machine Learning

An interesting avenue for improvement of results came up through the analysis of the text data. Categorizing articles purely on the presence of words can be quite successful in this context, but outliers like “donate” present an interpretation problem. As well, the unfortunate combination of text across unrelated columns (as a result of the OCR process) has the effect of arbitrarily splitting up key phrases that can be used to categorize articles.

There are some cross-cutting terms that cannot be successfully categorized, or appear in articles with different intent. These words are “hall”, the aforementioned “donate”, “drive”, and some others.

A possible way of solving this is to break out individual terms of interest without categorization, then apply a probabilistic matching based upon the presence of a list of words.

This of course is the realm of machine learning, and such an algorithm would only require straight text matching (level 1 and 2) to assemble the list of terms found in the article prior to analysis.

Depending on available time and resources, this may be an avenue worth pursuing, though the benefits for this particular dataset may be limited because most terms found map 1:1 to their categories. As well, as with all ML approaches, a training dataset would need to be assembled, and the effort to do so may not be worthwhile in this case.

Resolving WordPress “The response is not a valid JSON response” error

Recently I ran into the following dreaded error when trying to save a post in the Block Editor:

the response is not a valid JSON response

I had just started testing the Gutenberg block editor on the site, and one of the often-mentioned troubleshooting steps was to disable plugins (all WordPress troubleshooting seems to eventually boil down to disabling some plugin…) or to disable the block editor entirely and use the classic editor.

Disabling the block editor did resolve the problem, and in fact that is what I had done since WP 5 came out, as the error existed way back then; but this time I was determined to finally implement the block editor on this site.

I strongly suspected that the GT3 PageBuilder that was part of the theme might be the issue, so I disabled the GT3 PageBuilder plugin, and lo and behold, the block editor was able to save the post the next time. Problem solved, but then I would lose the PageBuilder content on my site, which I wanted to migrate over to blocks slowly.

Around the same time, I was working with the WordPress REST API for a side project, running queries like

http://site:8888/wp-json/wp/v2/posts

which would return a response containing a properly formatted JSON array of all posts on the site:

[{"id":3048,"date":"2020-07-07T09:50:54","date_gmt":"2020-07-07T17:50:54","guid":{"rendered":"http:\/\/site:8888\/?p=3048"},"modified":"2020-07-07T11:43:32","modified_gmt":"2020-07-07T19:43:32","slug":"block-editor-draft","status":"publish","type":"post","link":"http:\/\/site:8888\/block-editor-draft\/","title":{"rendered":"Block Editor Draft"},"content":{"rendered":"\n<p>Another block editor post 3456<\/p>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<p>Test<\/p>\n\n\n\n<p>Test<\/p>\n\n\n\n<p>Test<\/p>\n\n\n\n<div class=\"wp-block-media-text alignwide is-stacked-on-mobile is-vertically-aligned-top\"><figure class=\"wp-block-media-text__media\"><img src=\"http:\/\/siw ....rest of JSON content....

Some Luck

Through some luck, I had the GT3 PageBuilder reactivated when I ran the REST API query again, and this time the REST response contained additional, mysterious script content at the beginning, followed by the regular JSON:

        <script type="text/javascript">
            var custom_css = "";

            jQuery("#custom_css_cont").prepend(custom_css);
        </script>
    
        <script type="text/javascript">
            var custom_css = "";

            jQuery("#custom_css_cont").prepend(custom_css);
        </script>
    
        <script type="text/javascript">
            var custom_css = "";

            jQuery("#custom_css_cont").prepend(custom_css);
        </script>
    
        <script type="text/javascript">
            var custom_css = "";

            jQuery("#custom_css_cont").prepend(custom_css);
        </script>
    [{"id":3048,"date":"2020-07-07T09:50:54","date_gmt":"2020-07-07T17:50:54","guid":{"rendered":"http:\/\/site:8888\/?p=3048"},"modified":"2020-07-07T11:43:32","modified_gmt":"2020-07-07T19:43:32","slug":"block-editor-draft","status":"publish","type":"post","link":"http:\/\/site:8888\/block-editor-draft\/","title":{"rendered":"Block Editor Draft"},"content":{"rendered":"\n<p>Another block editor post 3456<\/p>\n\n\n\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-container\">\n<div class=\"wp-block-group\"><div class=\"wp-block-group__inner-
....rest of JSON content....

It struck me at that point that the <script> tags were being emitted by the GT3 PageBuilder plugin, and that this was the cause of the corrupted JSON.

Problem Found!

I ran a quick search for the “custom_css_cont” text within my project files and found the issue: there was a content filter added to create the page builder content, but the filter was eventually responsible for echoing the script content into the REST API output ahead of the JSON:

// GT3 PageBuilder filter callback: it echoes a <script> block directly into the
// output. During a REST API request this lands ahead of the JSON, corrupting it.
// Note that it also never returns $custom_css, as a filter callback normally should.
function gt3_custom_paddings_in_all_modules($custom_css)
{
echo '
<script type="text/javascript">
var custom_css = "' . $custom_css . '";

jQuery("#custom_css_cont").prepend(custom_css);
</script>
';
}

To fix this, I simply commented out the echo statement. Whether this has any material impact on my content isn’t clear yet, but I don’t use any custom CSS in the PageBuilder blocks and I haven’t noticed any other problems, so I’m good with that.

Interestingly enough, the saving of the changed content seems to occur fine in my case even when the JSON error occurs, so I suspect the error may be occurring when the editor reloads the post content via JSON after saving the post.

More evidence to support this: when the initial page for editing loads (after clicking on the post in the Dashboard), the JSON content is part of the page content; that is, it is not being loaded dynamically by the block editor at that time.

As a separate sanity check, I inspected the editor’s network traffic in my browser’s Developer Tools, and the problematic content was indeed present in the JSON response to the editor save.

So now I have the best of both worlds – the block editor is running fine and the problem in the GT3 Page Builder is bypassed. Since I plan to retire the Page Builder in favour of pure WP Blocks, this is a solution that I can live with and I can transition PageBuilder content over at my own pace.

Conclusion

A handy way to troubleshoot the “the response is not a valid JSON response” error is to run a query against the WordPress REST API to see if something is affecting the JSON generated by the API.
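A quick command-line check (the domain is a placeholder) makes the problem obvious: anything printed before the leading “[” or “{” is corrupting the JSON that the block editor expects.

curl -s "https://your.domain.here/wp-json/wp/v2/posts" | head -c 300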

This information can then be used to find the offending plugin by providing some hints as to where the problem lies, whether just at the plugin level, or at the specific lines of code that can then be modified.

In my case, code from the PageBuilder was deliberately being output into the response, but it could equally be debug messages inadvertently left in code, or compatibility issues caused by PHP updates, that trigger this behaviour.

There are other solutions for this error that can be found via searching; these include changing the permalink structure and addressing mixed (SSL/ non-SSL) content.

3D Printing: My Jumpstart

During the Time of the Virus, I decided to jump into 3D printing. I’d been looking at printers for a while, probably since 2018, when I had a colleague who talked about 3D printing the odd part for his cars.

Finally, this March, I pulled the trigger on the Ender 3 Pro from Amazon, mostly because it was the only one available within a few days (at that time I also looked at another model, but it rapidly became unavailable during the supply chain crisis that affected all retailers). I also found a lot of good reviews that tipped the scales in favour of the Ender, and the price was great as well, at around $350 CDN.

The Ender comes mostly disassembled, and part of the fun and learning is actually putting it together. Assembly included plugging the wiring harnesses into the steppers and putting together the rails and belts for the X-axis. The more delicate pieces, such as the base, the Y-axis, and the electronics, come pre-assembled.

It took about an hour, with some help, and the instructions are fairly decent, with the fasteners in separate labeled bags. More impressively, it comes with some decent accessories: a scraper, side cutters, nozzle cleaners, and extra extruder nozzles.

At the same time, I’d purchased the aluminum extruder upgrade, which I found to be unnecessary since the Ender 3 Pro version already has the upgraded extruder; the regular Ender 3 would likely benefit from it, though.

The first print I tried after assembling the Ender was one of the samples on the included micro SD card, using Amazon’s AMZ3D PLA filament, and it came out impressively well. Other samples downloaded from Thingiverse also yielded really good results.

The Ender 3 Pro comes with a textured flexible magnetic build mat that works well for PLA adhesion, and lets you remove and flex the mat to get the printed pieces off.

Useful Upgrades

I’ve added / changed the following:

  • Printed filament guide
  • Printed wiring clips to guide the ribbon display cable
  • Printed a back cover for the display
  • Silent mainboard upgrade (makes a huge difference in stepper noise)
  • Glass bed for printing ABS

Calibration

The Ender 3 prints really well out of the box, and would work great for creating standalone items (vases, pots, etc.) that don’t need to fit into other existing parts accurately. I went through some extensive calibration steps to try to print parts that require dimensional accuracy like clips and accessories for lights or cameras. I had to do the following:

  • Calibrate X, Y, and Z steps
  • Calibrate the extruder feed rate. The stock setting extruded less filament than expected (i.e. when commanded to extrude 10mm, it would extrude 9.5mm), so this was adjusted, which in turn meant dialing the Cura flow setting down to about 90% to compensate (see the worked example after this list).
  • Find the right retraction settings to reduce stringing
  • Find the ideal Cura line width setting to get the right wall width. This is a combination of line width and extruder flow.
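As a worked example of the extruder feed rate step (a sketch only, assuming Marlin firmware; the numbers are illustrative, not my printer’s actual values): if the firmware reports 93.0 extruder steps/mm but commanding 100mm of filament only feeds 95mm, the corrected value is 93.0 x 100 / 95 = 97.9 steps/mm.

; assumes Marlin firmware; values are illustrative
M503        ; report current settings - look for the E value on the M92 line
M92 E97.9   ; set the corrected extruder steps/mm
M500        ; save to EEPROM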

Dimensional Accuracy: The Challenge

What I’ve struggled with for dimensional accuracy is wall width. When printing a part, the wall squishes out horizontally during extrusion, affecting the dimensions by twice this squish amount. The key is to accept this amount of squish and factor it into the slicer calculations. The problem is that some amount of squish seems to help the wall layers adhere to each other, so getting great dimensional accuracy may come with sacrifices in part strength.

Another place where wall width comes into play is in the X and Y printer calibration. I had calibrated the printer using a 20mm cube, but hadn’t taken wall width into account. Since wall width is part of the cube’s measured dimensions, any squish ends up baked into the X and Y calibration as well.

Another approach is to just accept the printer’s characteristics and adjust the design’s dimensions (e.g. shrink the measurements to reflect the output). I’d rather not do this, but the reality of dealing with materials may require it, at least to some extent.

So far, I’ve been working with Cura’s line width setting at 0.5mm for a 0.4mm nozzle, which helps account for the squish (normal guidelines suggest 1.2 to 1.25 x nozzle size, so 0.5mm for a 0.4mm nozzle fits nicely).

Given what I’ve learned, my sequence of calibration might go in a different order, namely:

  • Calibrate extruder feed rate
  • Slicer flow rate calibration in conjunction with wall line width calibration
  • X, Y, and Z calibration. Note that Z calibration isn’t affected by wall line width, so it could be done almost any time.

Printer Pros

Good price and lots of support on websites. Just being able to read and absorb articles that are Ender 3 specific has been a fantastic learning experience, so in that regard it has been a really good starter 3D printer for me. Having to put it together helped my understanding of how the printer works and my comfort in swapping out parts.

It prints well right off the bat with PLA, and parts removal is easy (with the Pro version’s magnetic bed).

Printer Cons

Somewhat noisy – this can be remedied with the silent mainboard upgrade, and the fans can be changed out, but it would have been nice to have the silent mainboard included.

You need to add more bits to it. It might have been good to have more of the upgrades as part of the package, but on the other hand that would have increased the price, and it wouldn’t have been as much of a learning experience for me.

Verdict: Excellent printer for the price, and a really good starter 3D printer.

Developing for newer versions of iOS using an older version of Xcode

I recently ran into a situation where I wanted to hold off upgrading to macOS 10.15 (Catalina) but needed to develop and test on my iPhone XS running iOS 13.5, which requires a newer Xcode (11.4+) that only runs on Catalina.

When running an app on the iPhone (rather than the simulator) in this situation, Xcode displayed an error saying it could not locate device support files, along with a spinner and only an option to Cancel Running.

Rather than updating Xcode, the solution is to install the device support files. The support files go into the following folder:

/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport

(hint: To open the Xcode.app package in Finder, right-click on Xcode.app, and Show Package Contents in the context menu)

In my case, the Xcode 11.3.1 DeviceSupport folder contained subfolders with support files for iOS versions from 8.0 up to 13.2.

To get the right device support files, download the latest Xcode package from developer.apple.com in .xip format and extract the contents, resulting in an Xcode.app package. Open that package’s contents and copy the support folders you need into the above folder on your computer. In my case, I copied the 13.3, 13.4, and 13.5 folders from Xcode 11.4.1 into my 11.3.1 app.
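As a sketch (the extracted Xcode path and version folder are placeholders for whatever you downloaded):

# copy one version's device support folder from the extracted newer Xcode
cp -R "/path/to/extracted/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/13.5" \
      "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/"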

Attach the new device, restart Xcode, build the project (it may take some time to configure the device debugging files), then run!

Building an Agile Organization from the top down

The Scrum Team stuff is Easy (Sort Of)

Scrum has its roots in team empowerment. From the very beginning, teams are told to make their own decisions – “let the team decide what to do” is the oft-repeated mantra when a team faces a challenge.

These ideas can seem foreign to teams that have been told exactly what to do and how to do it. While the route to becoming a truly high performing agile team will take time and effort, much rests on the team’s shoulders and its ability to constantly improve.

Much has been written about the Scrum team and its techniques, tools, and processes. However, a successful agile transformation also rests on properly educating and engaging the senior decision makers in an organization.

Don’t forget about Stakeholders

It’s easy to transition development teams to take on more agility, but stakeholders, who often hold the senior leadership roles in the organization, may get left behind or ignored while the rest of the organization shifts.

To many, the idea of an agile business makes sense – react to change, get stuff done quicker, etc. But few know what implementing and living with an agile process looks like, and are likely unprepared for the amount of extra work this may involve.

Stakeholders are often stuck with traditional business metrics, demand cycles, and shareholder reporting, and thus need answers to basic questions like “when can I get this?”. They are not prepared to answer the question, “what set of functionality would you like to see in 2 weeks?”.

This leaves them in a potentially awkward spot. Since the rest of the world outside of the organization is going to be deadline-driven, there can be a significant communication and expectation gap between the development team running in an agile fashion, and the stakeholders not.

Don’t Bother Me with the Details

The stakeholders often just want things done by a certain date. Good agile teams get very specific about what “done” means, but that does not match the stakeholder’s view of doneness, which is “when I can actually start using it”. They don’t care about the journey, just the destination, whereas the agile team is constantly plotting its journey, re-evaluating, re-planning, and staying open to constant feedback.

Stakeholders just want to be able to set the destination and to not get bothered until it’s done, thanks. The whole reviewing of partially done items and having to provide feedback after every sprint seems like an imposition. Worse is answering the above question of what limited functionality they want to see at the end of the sprint when they already have a vision of the whole system in mind.

Stakeholders stay in the traditional mindset, expecting huge returns from some abstract corporate agility initiative, and are blindsided by the increased demands on their time and by what may seem like slow progress towards a goal. Their expectation was of projects with a single kickoff meeting, followed by months of development work, an occasional status meeting, inevitable delays, and delivery of some sort of system at the end that sort of works. Instead, they get involved in much more regular touchpoints, probably field questions daily, and have to assess work-in-progress and provide feedback to the team.

But at least that old mindset didn’t really involve them too much and isn’t that how software works, anyway?

Reality Check

This has been the largest change in dynamics I’ve encountered in my experience. A company may like the sound of agile, but often leadership does not invest the time in learning what this means for the organization’s structure.

An agile team’s greater transparency about its progress, and its need for better communication, can get shut down by upset or confused stakeholders very quickly, with damaging consequences for morale and collateral damage to the organization’s agile initiatives.

Adequately assessing the overall organizational level of maturity and knowledge around what agile means is essential when implementing agile. Here are some reasons:

  • Agile is a different mindset. The mindset is to be eternally curious, dynamic, and creative. There are often no easy answers to a problem, and the only answer may be to “just try our best guess at it” and see how it works out. Agile is about giving up control, at least in the traditional org chart sense (But was there really any control, or just an illusion of control?)
  • An agile organization is made up of interconnected, small teams working together to deliver value quickly. These teams work towards a common goal. Agile teams continuously refine what they do to get better, and are supported by leadership to keep reinforcing this culture
  • Agile organizational goals are often stated as customer goals, as meeting the customer’s unmet needs is what will help define the activities the organization will undertake. Leadership must ensure that teams are focused on these goals to eliminate wasted effort. Focusing on the customer also allows teams to deliver exactly what is needed, and no more.

Laying down these baseline philosophies across the organization, creating partnerships between stakeholders and teams, then living them on a day-to-day basis is what will help create success in an agile transformation.

Moving from User Story to Product Backlog Item

(Originally written in 2018, but still relevant today)

It’s become somewhat fashionable to gather requirements in User Story format to put into a product backlog. Pioneered some time ago in the early days of Agile, Extreme Programming, and other uprisings against the Great Gathering of Requirements up Front before starting work, it had a purpose that seems to have been lost in a few organizations.

Simply put, the expression of a requirement through the standard User Story form of “As an X, I want Y so that Z” was meant as a starting point, not an ending point. I’ll return to that point later, but I wanted to write about how I’ve seen them used (or abused) through a blind devotion to this form. I recently walked past an entire whiteboard full of Post-Its, all in the User Story form, at a client’s office.

The challenge to any development team has always been about taking some client need and translating that into a product. That client need may be well- or poorly-defined. It may be devoid of technical jargon, or full of it — it doesn’t, and shouldn’t matter to the development team. Identifying the existence of a need, and having a process where this need can ultimately be fleshed out and realized, is simply all there needs to be. At least in an agile environment.

The misuse of User Stories as a way to express requirements can be problematic in many ways:

1. User Stories can be too granular. I’ve seen individual User Stories for an administrator to add a product, edit a product, delete a product, and so on. Each discrete function was expressed as a User Story, ballooning the number of user stories when maybe all there needs to be is an overarching “As an administrator, I would like to manage products” User Story.
2. Excessive granularity can take away from a more holistic view of the requirements. By individually specifying each step of the journey, it becomes hard to look up and around you rather than focusing down at your feet; spotting the right destination gets a lot harder, reducing agility.
3. User Stories can cross into specifying the “how” to do something (the realm of the Development Team), rather than the “what” or “why” (the realm of the Business / Customer).
4. Innovation and creativity could get stifled or limited by the way the User Story is written. Even without specifying UI or system design, the User Stories could subtly influence the way the Development Team builds the system, rather than taking a more agile and open approach. Using technical concepts that could translate into programming concepts may reinforce this tendency, such as specifying a desired user workflow from one page to the next.

A User Story that is not appropriate to be worked on can devolve into confusing or heated planning meetings with all parties struggling to come to a consensus on how to implement it.

There are several approaches to solve this problem:

First, understand that in Scrum the Product Backlog can be composed of anything: tasks, bugs, and yes, User Stories. Free your mind from the notion that Product Backlog Item == User Story. (Ignore the fact that JIRA calls its main trackable unit of work a User Story; they have to answer for that.)

Next, figure out how you want to use User Stories.

1: Find a way to satisfy the development team’s definitions of what they need when they start building something. They might not need much if they are an agile team and the business has subscribed to being agile (in this case, starting work with just enough requirements to get going, and having the attitude to figure things out along the way).

2: Recast the role of the User Story as a conversation starter for capturing ideas. This helps to frame the User Story as something that is valuable for the business to help initiate something in the development team. The development team can also understand the User Story because of the simplicity, and can possibly start imagining how it might be implemented.

Applying the 3 C’s Model (Card, Conversation, Confirmation) can help with establishing the contents of a User Story. In fact, very little needs to be there to initiate the conversation.

Lightening the load of the User Story makes it easier for whoever is writing it, and gives the development team much more leeway to start asking questions and move towards being able to estimate effort.

3: Use detailed User Stories as additional documentation to help capture a solid list of requirements for developers. Encourage developers to look at the repository of User Stories for additional insight or context. Examples of usage, especially when the software performs detailed calculations, are fantastic in a user story. This moves the whole team towards Specification by Example or Behaviour-Driven Development processes, which are ways of working with the stakeholders.

Express User Stories in business-speak, in broader terms.

Applying this to Sprint Planning and the Product Backlog

Warning:

Some tools even reinforce this User Story-centric point of view; JIRA, for example, makes the User Story the topmost estimatable item in the backlog. Tasks belonging to a parent user story can’t be estimated, an annoying limitation.

How to be a rockstar designer

Web design increasingly cannot exist in isolation: it requires knowledge of good user interface design guidelines and the ability to work well with developer types.

I’ve noted a difference between designers (and their designs) who have started out in print first versus those who start out in web. You can span both successfully, but have to be aware that these are two different things entirely.

People read and skim quickly on the web. They are normally there to achieve some sort of goal, be it buying something, learning how to do something, or getting the latest news, and then they move on. If I could distill it down to a single word, the web is meant to be efficient. Any deterrent, however slight, to achieving this goal causes frustration.

Printed material is often something voluntarily flipped through at leisure, beverage in hand, or otherwise. A newspaper or magazine article, a glossy brochure filled with aspirational images and detailed specifications of your next apartment/car/SUV, or a flyer, filled with hundreds of on-sale items.

Print material is rarely judged on its ability to convert lookers into buyers; it’s just too impractical. Yes, you can ask people to mention where they saw your ad, track coupon use, or devise some other way to count success, but it’s just hard. Customer acceptance of a print design is therefore rarely measured, so its approval comes down to some people from marketing in a room, the reputation of the designer, and other fuzzy factors.

On the other hand, successfully working on the web takes a designer who is devoid of massive ego and accepts one fact: other factors drive the acceptance of your design, not you. These factors are pretty important ones: your customers (potential and existing), search engines, and developers.

Where this affects design is in some pretty core areas:

  1. Functional and obvious outweighs glitzy and abstract on the web. Remember the old days of home pages with a huge welcome graphic (often animated in Flash) and some paltry, tiny text to go to the next page where the actual website started? These are gone, for good reason. Weird, abstract, making-people-have-to-think-about-it stuff is bad.
  2.  Simplicity is back. For the most part, this is in response to Responsive Design and its ‘blocking’ of design elements to flow nicely regardless of device and making things like images and text bigger. A lot of it is also a realization of good UI/UX in terms of paring copy down to the key points by understanding how people read on the web. I think the pendulum may have swung too far toward simplicity, so it may swing back.
  3. Text is good on the web. Gone are weird buzzwords and phrases, hopefully replaced with relevant text that tells you what this website is about. Simple is good, but too simple is bad. Just enough, and relevant is the sweet spot.

If you came from a print background, some of the above would actually be counter to what you may have done before.

As a web designer, you have to design for the first impression.

As a print designer, you don’t. Unless you’re designing for a direct mail campaign, someone likely gave your print brochure to your potential customer, or he/she picked it up. The customer therefore knows a little about the product or service you’re trying to sell, and the company behind it. They may be looking for a little bit more information to make that final purchase decision or to gain a spouse’s blessing to buy it.

A web designer has to assume the person visiting the website has never been there before. The designer has to (figuratively) beat the visitor over their head with obviousness and other signals (called “information signals”) such as repeating what the website is about, what the company does, and to start building trust. Miss out on this, and the customer will either go elsewhere on their search results list, or they will start off with a slightly confused or annoyed state of mind when dealing with your site. Neither is good – first impressions count.

Guidelines:

1. Literal is gold for the web

I was once chided by a designer who said that having a hero image of a go-kart on the home page of a go-kart track's website was too literal, and that maybe it should have a picture of a dog (the track's mascot) instead.

After picking my jaw up off the floor, I pointed out that Boeing has airplanes on its home page (thankfully, they did), Ford has cars on its home page, and neither seems to feel these are too literal.

2. Know your target (media) and design for it

Switch your thought patterns. Use the same branding guidelines, the same images, typography, etc. to achieve consistency between your different channels, but embrace the differences and utilize their advantages. Think how different a TV ad for a company is from a print ad, and how companies that do both achieve consistency (say someone big like Coke). The web is the same.

3. Check your ego at the door

Expect review and criticism of your design from UI experts, conversion specialists, and others who pick apart websites for a living to make them sell more. You're here to help the employer or client who's paying your bills make money, not to promote your own portfolio.

4. Engage with other team members

If you are splitting development duties with developers, learn something about their development environment and what they need from you to build the pages (some teams might want HTML, CSS, and images; others are fine building pages themselves and just need good images, or even just your PSD files to slice on their own). Understand that some design elements may be really hard to achieve, so a compromise may have to be struck between complete fidelity to the original design and one that looks just as good but takes 20% of the time.

5. Be brave

Let your baby (your design) go through usability testing and other review processes to see how well it resonates with customers. Savour the gratification that comes from a great design that's proven to work well with them.

I've mentioned ego several times in this post. It's an unfortunate yet accepted fact that good designers push the creative envelope and can feel as if they've earned the right to go full prima donna over any critique of their design. It's a natural defence mechanism, yet it's a completely destructive one when you need to work as a team to benefit your client. Avoid designers like these like the plague.

Configuring Nginx to reverse proxy .NET Core on Mac

Moving on from developing .NET using Visual Studio Community Mac, I started working on the necessary configuration for actually running on a Mac host (and by extension, a non-Windows host).

The main task here is to configure Nginx to assist Kestrel (the .NET Core web server). Kestrel currently excels at running the .NET Core application itself but is not fully featured enough to handle everything else well, such as security and static assets like images, hence the need for two web servers.

The setup for Nginx to run in this reverse-proxy configuration is pretty straightforward.

MAMP configuration

I use MAMP to package up Apache, MySQL, and Nginx for development work, and it helps keep things fairly painless to configure. However, the basic configuration for MAMP does not allow Nginx to reverse proxy a custom address and port, which is what’s needed to direct requests to Kestrel.

Configuration Steps

The solution is to edit the template that MAMP uses to actually generate the nginx.conf file. Here are the steps:

1. In the MAMP menu, select File | Edit Template > Nginx (nginx.conf). This opens up the template file used to actually generate the nginx.conf. You’ll see various predefined macros in red.

2. Add a section to the template file and leave the rest alone – I chose to add a new "server { }" definition inside the "http { }" section, above the first existing "server { }" definition. This adds a new listener on port 9999 that passes all requests on to Kestrel listening on port 5000.

server {
  #listen over http on port 9999 on localhost
  listen 9999;
  server_name localhost;
  #Kestrel server
  location / {
    proxy_pass http://127.0.0.1:5000/;
  }
}

3. Start (or restart) Nginx through MAMP and all should be well. The actual nginx.conf file generated by MAMP using the above template can be found in /Library/Application Support/appsolute/MAMP PRO/conf/. It may help to double check this config file to make sure your changes are correct and being seen by Nginx.

To run your solution, simply run Kestrel and your .NET Core app through Visual Studio, or by using the dotnet run command line instruction inside your project folder.

Nginx will now proxy all calls to Kestrel. Open up your site at localhost:9999.

There are further optimizations you can add to the nginx.conf file. Foremost is to configure static file requests to be handled by Nginx rather than Kestrel for performance reasons.
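
As a rough sketch of what that might look like (the root path below is a hypothetical placeholder for your app's published wwwroot folder, and the extension list is just an example), the server block from earlier could be extended so Nginx answers static asset requests directly and only proxies the rest:

server {
  listen 9999;
  server_name localhost;

  # Hypothetical path to the app's wwwroot folder – adjust for your project
  root /Users/yourname/Projects/MyApp/wwwroot;

  # Try to serve common static assets straight from disk; fall back to Kestrel if not found
  location ~* \.(css|js|png|jpg|jpeg|gif|svg|ico)$ {
    try_files $uri @kestrel;
  }

  # Everything else goes to Kestrel
  location / {
    proxy_pass http://127.0.0.1:5000/;
  }

  # Named location used as the static-file fallback (no URI part allowed on proxy_pass here)
  location @kestrel {
    proxy_pass http://127.0.0.1:5000;
  }
}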

Visual Studio 2017, .NET Core, MVC and EF on Mac

I’ve started playing with Visual Studio 2017 on the Mac, pulling across a sample MVC Core + Entity Framework Core tutorial project on the Microsoft site. The tutorial can be found here.

I run Windows in a VirtualBox VM on the Mac, so I have a full Windows 10 + VS 2017 Community install with MSSQL 2016 on it.

On the Mac side, I was trying the latest VS 2017 Community for Mac, along with ASP.NET Core.

I was hoping to suss out how much cross-platform compatibility there was, and how much of a poor cousin VSMac was compared to its Windows counterpart, which is still one of my favourite IDEs.

Installing

Installing VSMac is pretty straightforward and .NET Core is a separate install documented on the Microsoft website.

Moving the project over to the Mac side

I started and finished the project on the Windows VM, running against MS SQL Server. The project opens up fine in VSMac. I had high expectations that it would, and it did. It even builds and runs!

Of course, the lack of database access on the Mac side needed to be addressed, but the web project hosted inside VSMac started up and ran just fine in Safari using localhost and a custom port number just as in the Windows VM and Edge.

Converting to use MySQL instead of SQL Server would be the next challenge.

Adding Pomelo for MySQL Support

I didn’t find too many options for MySQL. There are some hints on the MySQL blog that MySQL Connector works with Core, but I couldn’t actually find a package for the Mac listed in their downloads so I gave up. The one that looked OK is Pomelo.

Pleasantly enough, Pomelo.EntityFrameworkCore.MySql is one of the packages listed when firing up the Project | Add NuGet Packages… option in VSMac.

Simply add the package to your project at this point and you’re almost ready to go.

Changing the Entity Framework Provider to MySQL

This was also fairly straightforward. In the Startup.cs file, the database context needs to be adjusted to use MySQL.

From:

public void ConfigureServices(IServiceCollection services)
{
  // Add framework services.
  services.AddMvc();

  services.AddDbContext<WebTestContext>(options => options.UseSqlServer(Configuration.GetConnectionString("WebTestContext")));
}

To:

public void ConfigureServices(IServiceCollection services)
{
  // Add framework services.
  services.AddMvc();

  services.AddDbContext<WebTestContext>(options => options.UseMySql(Configuration.GetConnectionString("WebTestContext")));
}

The connection string in the appsettings.json was also changed to the MySQL flavour:

 "ConnectionStrings": {
 "WebTestContext": "server=localhost;port=8889;database=thedbname;uid=myuserid;pwd=thepassword"
 }

Once this was done, running

   dotnet ef database update

on the terminal command line in the project directory (where the .csproj file is located) should attach to the configured MySQL instance and create the required tables (in this case, just one) for the sample project.

And voila – things run and the database is created. Quite impressive. You can even add a new movie. But alas, not a second movie…because…

Add Auto Increment to ID field in MySQL

For some reason, the Pomelo provider, or .NET, or something somewhere doesn't know that EF relies on the table's ID field being an auto-incrementing field. This causes any table insert beyond the first item to fail with a MySqlException: Duplicate entry ‘0’ for key ‘PRIMARY’ error.

The fix is simple enough; either:

  1. Go into phpMyAdmin and change the ID column's attributes to check the A_I (Auto_Increment) box, then save changes; or
  2. Run a SQL command to do the same thing – something along the lines of
ALTER TABLE `Movie` CHANGE `ID` `ID` INT(11) NOT NULL AUTO_INCREMENT;
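
A possible EF-side alternative (a sketch only – I haven't verified how the Pomelo provider handles it in migrations) is to mark the key property as database-generated on the model, so the column should be created as an identity/auto-increment field in the first place:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Movie
{
  // Tells EF the database generates this value on insert;
  // the MySQL provider should map this to an AUTO_INCREMENT column
  [Key]
  [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
  public int ID { get; set; }

  // ...remaining Movie properties from the tutorial model
}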

Notes

  1. Entity Framework Core references must be added manually when using VSMac – you can't add these through NuGet right now. The .csproj file must be edited by hand to create the references (a sketch of what this might look like follows this list). This appears to be a bug or limitation in VSMac. Since I started this project on VSWin and moved it to VSMac, I didn't hit the problem, but I did with a new project started on VSMac. I suspect that moving back and forth between the environments may be quite feasible.
  2. Scaffolding for helping create Insert / Update / Create views does not appear to be present for VSMac while VSWin has an option to create an MVC Controller along with associated views. These would be really handy to help build out your basic CRUD functionality. However, there may be options using Yeoman. More to come.
  3. Razor view tag helpers do not seem to provide syntax highlighting in VSMac.
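
For reference, the hand-edited portion of the .csproj might look something like the following. The version numbers are placeholders (match them to your installed EF Core version), and the CLI tools reference is what makes the dotnet ef commands available:

<ItemGroup>
  <!-- EF Core and the Pomelo MySQL provider; versions are placeholders -->
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="2.0.0" />
  <PackageReference Include="Pomelo.EntityFrameworkCore.MySql" Version="2.0.0" />
</ItemGroup>
<ItemGroup>
  <!-- Enables the dotnet ef command line tooling used above -->
  <DotNetCliToolReference Include="Microsoft.EntityFrameworkCore.Tools.DotNet" Version="2.0.0" />
</ItemGroup>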

Next Steps

More research to come, but the next step will be configuring the project so it runs nicely under NGINX or Apache as a Kestrel reverse proxy on the Mac without Visual Studio hosting.