Which programming language is used in the computer of a car?

After work today, I was reading about the throttle response adjustment procedure to mitigate turbo lag in the 3.5 EcoBoost F-150.  For the non-car nerds, I’ll save you a click – it’s just a way to re-calibrate the electronically controlled engine throttle, making it feel more responsive when the gas pedal is pressed.  Nearly every part of a car is computer controlled or monitored these days, so I got to wondering…what computer language(s) do Ford and other manufacturers use to write the software that runs on the car’s computer?

Some quick Googling indicated that C is overwhelmingly used in car electronic control modules.  This makes sense, as C is very common in embedded systems because it provides easy access to hardware, has low memory usage, and most importantly – it’s fast.  Auto manufacturers use C with a specific set of guidelines known as MISRA-C (Motor Industry Software Reliability Association C).  MISRA-C is not a separate language; it is a set of guidelines for programming in C that helps avoid bad code which could cause dangerous behavior while a car is in operation.  I’m not a C programmer and a lot of this goes over my head, but MISRA-C is essentially a rigidly defined programming style that, when followed, helps programmers steer clear of common errors and pitfalls when writing software for a car’s computer.

According to Wikipedia, MISRA-C was originally targeted at the automotive industry, but it has evolved into a widely accepted model for best practices among embedded systems developers in other industries, including aerospace, telecom, defense, and railway.  Pretty cool stuff!  If you’re interested, here is some more reading on programming car computers:

https://www.quora.com/Which-programming-language-is-used-in-the-ECU-of-a-car

https://stackoverflow.com/questions/1044271/automobile-programming-languages

http://www.embedded.com/electronics-blogs/beginner-s-corner/4023981/Introduction-to-MISRA-C

http://www.eas.uccs.edu/~mwickert/ece5655/code/MISRA/MISRA_C_key_rules.pdf

Here is an interesting excerpt from the last link:

Rule 59 (required): The statement forming the body of an “if”, “else if”, “else”, “while”, “do … while”, or “for” statement shall always be enclosed in braces.  Basically, this says that from now on you must clean up your act – you can’t write sloppy things like the else clause in the following example:

if (x == 0) 
{ 
    y = 10; 
    z = 0; 
} 
else 
    y = 20;

The idea of this rule is to avoid a classic mistake.  In the example below, the line z = 1; was added.  It looks as though it’s part of the else clause, but it’s not!  In fact, it is placed after the if statement altogether, which means the assignment will always take place.  If the original else clause had contained braces from the beginning, this problem would never have occurred.

if (x == 0) 
{ 
    y = 10; 
    z = 0; 
} 
else 
    y = 20; 
    z = 1;


Excel is NOT a database!

Working in the GIS field, it’s very common to gather data for a project from various ad hoc sources, and often that data is delivered in the form of an Excel spreadsheet.  I’ll preface by saying that there’s nothing wrong with delivering data in this way.  Excel is a powerful and widely used tool in the business world, and just about everybody who’s ever worked in an office has opened an Excel spreadsheet.  But too often in my GIS career I’ve come across projects that use Excel spreadsheets as a master repository, storing large amounts of tabular data within.  It’s not hard to imagine how data integrity can quickly become compromised when people are copying/pasting/emailing/editing/saving spreadsheets, both on their local disks and on shared network folders.

So why isn’t Excel a database?  It has rows and columns, and you can sort data, create formulas, and query data with features like VLOOKUP, right?  While those things may be true, there are several reasons Excel is not a database:

  • Excel cannot constrain a column to a single primitive datatype – integer, floating point decimal, boolean, string, etc.  In a database table, each column or field can be constrained to allow only one datatype.  Often there will be a column you want to contain only a specific kind of value, like an integer for an ID number.  Excel does not provide this kind of data validation, and will not prevent users from entering an invalid value for the ID.  A database table would throw an error, not allowing text to be used in the integer ID column.  A database table can also disallow null or empty values for a column, requiring every row to have a value – useful for ensuring each row has an ID number, for example.  Excel does not provide this functionality.

  • Excel does not allow multiple users to open and edit the same Excel file.  This is a very limiting factor experienced by many users who share files on network drives.  I have personally run into this issue quite a few times, having to ask somebody on the team to close a file they had open so I could make some edits.  Inevitably, I or somebody else would make a copy of the file to work on locally, and then we’d have issues reconciling each person’s edits with the master copy.  A database is designed to allow hundreds or thousands of users to concurrently query, view, and edit the data.

  • Excel is slooooow.  Like really slow.  The file size can balloon when there are many rows and columns, many formulas, and special formatting, filters, etc in the worksheets.  Sorting worksheets with many rows can be painfully slow, and  out of memory errors can happen when the content of the Excel file becomes too large.  Databases are designed to maximize performance and offer nearly limitless space, depending on hardware configuration of course. (SQL Server can handle databases of up to 524,272 terabytes!)  Excel files should be out of the question when data tables have hundreds of thousands of rows or more or data.

  • Excel does not allow you to query or join data from multiple tables.  Well, it does kinda… features like PivotTables, VLOOKUP, and HLOOKUP can provide functionality similar to some database queries by summarizing data, searching for matching criteria, etc.  I won’t get into how to use these features, but they do not give quite the same functionality and flexibility as a database provides.  A significant limiting factor of Excel is not being able to link or join tables between files – all of these operations must be done with worksheets in the same Excel file.  Where Excel really falls short, however, is one-to-many relationships.  This is an important concept in data relationships, and these kinds of table joins cannot easily be performed in Excel.

  • Last, but certainly not least, Excel doesn’t provide any data backup or recovery tools.  You may have some automatic file backups on your computer’s hard drive or on a shared network drive, but these may be insufficient if an Excel file is destroyed or altered.  A database will have built-in features to back up data, and will sometimes allow the data to be rolled back to a given point in time.  Bringing data back online after a hardware failure is generally an easy process with most database software.  A database can also keep logs of all changes to the tables, making it possible to undo a specific table update if an error was made.  On a related note, databases offer options to securely access your data as well.  Excel files offer little in the way of protecting the data from unauthorized viewers.

To summarize, Excel is a great tool for data analysis, presentation, and mathematical and statistical calculations.  It is a not-so-great tool for storing large amounts of data.  It’s often difficult to maintain data integrity using Excel, especially among multiple users.  When the content of the file becomes too large, Excel performance drops significantly, making viewing and editing the data difficult.

For any readers out there who currently find themselves in a situation where they feel Excel is not quite the right tool for data storage, I would urge you to explore using a database for your project.  There are quite a few options out there, both free and paid, for setting up a database.  It’s probably a little beyond the scope of this post to discuss all the various database storage and retrieval types and the specific database software, but since I’m a .NET developer I would suggest trying SQL Server Express.   It’s easy to set up a local database server and begin experimenting with migrating data from Excel into database tables.  You may also already have Microsoft Access installed on your machine as part of Microsoft Office.

F-150: Every Day Carry

Recently I discovered reddit.com/r/VEDC, a community dedicated to discussing essential tools and supplies one should carry in their vehicle at all times – your “vehicle every day carry.”  I made a post in r/VEDC, but I will share it here as well.  I like to keep my truck pretty well stocked with tools and supplies that I may need to handle a car emergency, breakdown, or minor injuries.   Just about every item in my vehicle inventory has been used, or I was in a situation where I wish I had a certain tool or item so I added it to the collection for the next time.  I found some inspiration from the internet as well as from real world experience.  Flat tires, dead batteries, and small repairs are common when driving and camping in the Arizona mountains and desert.  I need to be able to repair my vehicle, or remain safe until help can arrive.  It’s also not uncommon to find a stranded motorist when off-roading, so I like to be able to offer help when possible.

First, the truck – a 2013 Ford F-150 3.5 EcoBoost:

Inventory:

And what isn’t pictured is probably the most important safety item you should be carrying – water.  Living in Arizona, this is especially important.  I learned this quickly after moving to this state when a serious accident backed up a mountain highway in both directions for miles.  It was summer, and highway patrol officers were walking up the highway, handing bottles of water to the motorists.  An unplanned extended stay in the desert in the summertime without water and air conditioning can get dangerous fast!  So if I’m going to be more than 15 minutes from a Circle K, I’m packing a gallon or two of water just in case.

The tokens that wouldn’t die

Here’s a funny post from The Daily WTF that shows some production code that allowed an API to issue tokens that were valid for nearly 50 years!

https://thedailywtf.com/articles/the-tokens-that-wouldn-t-die

I found this especially relevant as I’ve been working with API authentication recently, and it reminds me to take extra care when dealing with these security concerns, especially as someone who is new to this area of software development.


Making a POST request to an OAuth2 secured API using RestSharp

Recently, a coworker asked me how to best consume (using C#) an OAuth2 secured API which I had deployed.  I have been using RestSharp (along with JSON.NET) to make web requests in some of my applications recently, so I wrote a quick sample application for him demonstrating how to communicate with my API using those libraries.  I included it with the documentation for that API, but I want to share the basic concepts here as well.  Since the API is secured with OAuth2, the first step is to get an access token using an API key and password:

var url = "https://my.api.endpoint/GetToken";
var apiKey = "api_key";
var apiPassword = "api_password";

//create RestSharp client and POST request object
var client = new RestClient(url);
var request = new RestRequest(Method.POST);

//add GetToken() API method parameters
request.Parameters.Clear();
request.AddParameter("grant_type", "password");
request.AddParameter("username", apiKey);
request.AddParameter("password", apiPassword);

//make the API request and get the response
IRestResponse response = client.Execute(request);

//deserialize the response into an AccessToken
//(AccessToken is a class that models the token JSON shown below)
return JsonConvert.DeserializeObject<AccessToken>(response.Content);

If you were successfully able to authenticate using your API credentials, you should receive a response that contains an access token and other information. Depending on the API you’re accessing, it may look similar to this:

{
  "access_token": "v5s5UckbViR9gZUXiu...",
  "token_type": "bearer",
  "expires_in": 43199,
  "userName": "api_key",
  ".issued": "Sun, 30 Jul 2017 17:05:37 GMT",
  ".expires": "Mon, 31 Jul 2017 05:05:37 GMT"
}

Now that the application has been authenticated and granted an access token, we can provide this token when calling various API methods to get authorization.  Here is a sample POST request to my API, calling the DoStuff() method and including an object which contains the input parameters:

var url = "https://my.api.endpoint/DoStuff";

//create RestSharp client and POST request object
var client = new RestClient(url);
var request = new RestRequest(Method.POST);

//request headers
request.RequestFormat = DataFormat.Json;
request.AddHeader("Content-Type", "application/json");

//object containing input parameter data for DoStuff() API method
var apiInput = new { name = "Matt", age= 34 };

//add parameters and token to request
//(access_token is the token string returned by the GetToken call above)
request.Parameters.Clear();
request.AddParameter("application/json", JsonConvert.SerializeObject(apiInput), ParameterType.RequestBody);
request.AddParameter("Authorization", "Bearer " + access_token, ParameterType.HttpHeader);

//make the API request and get a response
IRestResponse response = client.Execute(request);

//deserialize the response into ApiResponse, a class that models the data we want from the API
ApiResponse apiResponse = JsonConvert.DeserializeObject<ApiResponse>(response.Content);

And that’s pretty much it – the ApiResponse object now has all the data we need from the server response, whatever that may be depending on the API.  As you can see, these two libraries together make sending and receiving data to/from a server easy with just a few lines of code – getting authenticated with the API server, sending some data, and receiving a deserialized response.  More information about RestSharp and JSON.NET can be found here:

http://restsharp.org/

http://www.newtonsoft.com/json


Color Calibrate Your Monitor

When designing a web app or map, it is important to realize that the colors you see on your monitor may not be the same colors somebody else sees on their monitor or mobile device.  Perhaps you’ve noticed that one of your monitors looks a little “off” compared to a second monitor, or maybe a map you designed looks different on screen when compared to a printed copy.  This is because your monitor color settings may not be calibrated correctly.  Most modern flat panel monitors will have on-board controls to adjust various display settings like brightness, sharpness, and contrast.  Often they will come with preset options like “theater mode” or “game mode.”  These work well for the average consumer, allowing them to quickly pick a color setting that looks good for their preferences.  However, if you’re doing color sensitive work – cartography, web development, photography, etc. – you don’t want the monitor to just “look good,” you need your monitor to display colors as close to true as possible.  This will ensure that content created on your computer will display as accurately as possible when viewed on a wide array of devices – devices which may or may not have been properly color calibrated by their users.

So how do you color calibrate a monitor?  There are two basic ways to do this.  The first involves purchasing a device that can sense the color output of your monitor and automatically adjust settings.  These devices are very accurate, but are aimed more at professional photographers and others who perform extremely color sensitive work.

The second method for color calibrating a monitor involves using test images and adjusting various hardware and software settings.  This method is sufficient for most web developers and GIS professionals, and depending on the make and model of the monitor, should yield sufficiently accurate color display.  If you’re using Windows, there are some basic built in settings accessible via Control Panel > Display > Calibrate Color:

This wizard will take you through some easy to follow steps and will yield a rough monitor calibration.  More information is available from digitaltrends.com, including information on performing the same task in MacOS.

I prefer to fine-tune the calibration on my monitors a bit more, however.  I use some test images from lagom.nl that allow me to properly adjust settings on the monitor itself, as well as settings in the operating system or video card settings application.  Here is the first test image, which assists in adjusting contrast.  When properly calibrated, a monitor should display roughly equal steps in brightness over the full range from 1 to 32 for each color:

Adjust the contrast setting on your monitor until you can just barely see the first color bar, and you can also see a division between each subsequent color bar.  Here is a link to the complete set of monitor test images – each image tests a different setting and gives some instructions on what setting to adjust:

http://www.lagom.nl/lcd-test/

If you are using an NVIDIA graphics card, they provide some excellent utilities and information on how to properly calibrate monitor color:

https://www.geforce.com/whats-new/guides/how-to-calibrate-your-monitor