
10 Reasons why you should learn MCU bare metal programming

When I approached embedded Linux about 15 years ago, I soon found that compiling a whole system from scratch is one of the most fascinating activities for a low-level programmer. It gives you the feeling that you know all the parts of the system and can put your hands on them.

I think it's the same with today's microcontrollers (MCUs, Microcontroller Units). Vendors usually release some libraries in the form of an SDK, or provide a customized version of some Real Time Operating System (RTOS). My opinion, and my experience, is that you should look at these as a reference implementation for the chip, or as a base for quick prototyping; but most often, for a professional, long-term maintained project, you should prefer creating your own code base, for a number of reasons I won't list here. This doesn't mean you have to rewrite it all and reinvent the wheel; nevertheless, a balanced mix of 3rd party libraries and your own code is usually better than taking a whole SDK from your vendor without any prior analysis of it.

Anyway, even if you think you will always use the vendor's IDE and libraries, it is still worth knowing what's really happening when you use them, by trying to develop at least the first stages of the boot and the minimal set of drivers needed to check that the chip and the board are alive: e.g. clock settings, a speaking UART, GPIO control, and hopefully a few more.

Here are my 10 reasons for it:

1. I think it's simply fun: the first time I made an LED blink by using a GPIO on an Olimex board, with no external libraries, I was really excited.

2. You will be a better firmware developer if you have a precise idea about how your CPU boots and how your binary code is built.

3. SDKs are sometimes really bad; I've lost a lot of time on a buggy USB host implementation in an Atmel library I was forced to use by one of my clients, with no support at all even after submitting the issue in Atmel's bug tracker.

4. Even when the SDK or RTOS is a good one, sooner or later you will find a bug in it; if you don't know what a SoC register is, you're completely lost, and are forced to beg someone else to fix the bug.

5. If you keep an eye on flash size, and don't want a 100 kB binary for an LED-blinking application, you'd better understand what your binary is composed of, and hopefully try to squeeze it a bit.

6. When your client, or your boss, decides to change CPU family, if you simply used the SDK without much thought, you'll have to rewrite it all; on the other hand, if your code was well structured, and you kept all the SoC-related stuff separate, you'll be able to reuse some code.

7. Not all of the HW features are actually implemented in SDK libraries or RTOS drivers; for instance, timers and PWM/capture functionalities are very complex, and it's often easier to read the datasheet and implement what you need than to try to achieve the same by calling the SDK's C functions.

8. Should you need to integrate a 3rd party library, perhaps one conflicting with the SDK, your work will be much easier if you have some previous bare metal programming experience.

9. OK, you're just thinking: "bare metal programming is too much for me, I was programming GUIs on Windows till yesterday". Whatever your skills, you'll always learn something by going deeper into the lower layers of the source code stack: you'll learn something about C if you know Assembly, or understand JavaScript better if you know C. Likewise, you'll be a better RTOS user if you know bare metal programming. I suggest you have a look at BaTHOS and BaTHOS MCUIO.

10. Maybe you're now convinced to spend a few days learning about bare metal; you talk about all of this with your boss, and he/she simply answers: "No way, I want it quick and dirty, the deadline is yesterday". You'd better change jobs.