
Machine Learning: The Knowledge and Skills Gaps That Developers Must Bridge

New skill sets will be in demand as embedded developers work on artificial intelligence projects

Read this to find out about:

  • The training and inference phases of a machine learning development project
  • The terminology and jargon commonly used by providers of machine learning technology
  • The features of the tools and design environments for machine learning development provided by MCU and FPGA manufacturers

The most loudly heralded breakthroughs in machine learning and the wider domain of Artificial Intelligence (AI) have been made by the computer science community. From autonomous driving to computers that beat chess grandmasters, the best-known achievements in AI have depended on the deployment of massive computing resources – for example, arrays of ultra-high-speed, high-power Graphics Processing Units (GPUs) running millions of lines of code.

With less fanfare, embedded device designers are beginning to bring the benefits of AI to edge devices that impose much tighter constraints than computer scientists face: embedded devices provide orders of magnitude less processor bandwidth and memory than the data centers running large-scale AI applications.

Despite the difference in the hardware resources available to them, embedded engineers today largely use development processes, tools and frameworks which originated in the world of computer science. This means that the AI development process can appear overwhelming to electronics engineers who have previously used Integrated Development Environments (IDEs) targeted at embedded hardware components such as microcontrollers or FPGAs. AI projects also call for skills and knowledge – such as the acquisition, selection and curation of a training data set – that conventional electronics system development never required.

But as this article describes, semiconductor manufacturers are starting to extend their products’ capabilities and their tool chains to support the requirements of embedded AI projects.

Interestingly, native embedded approaches to machine learning are also emerging which strip out much of the complexity in AI software, eliminating the need for computer science know-how.

Neural Networking Introduces a New Development Workflow

The basic process of machine learning consists of just two stages: training the neural networking model, the ‘training phase’; and deploying this neural network on a target device, the ‘inference phase’. In an embedded environment, this target device will most often be a local or ‘edge’ device based on a microcontroller, an applications processor or an FPGA.


Figure 1: The two phases of every machine learning development project. (Image courtesy of NXP Semiconductors.)
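To make the two phases concrete, here is a minimal sketch using TensorFlow/Keras, one of the frameworks discussed later in this article. The tiny network, the random stand-in data and the class count are all invented for illustration, and on a real project the inference step would run on the target device rather than in Python.

```python
import numpy as np
import tensorflow as tf

# --- Training phase: runs on a workstation or in the cloud ---
# Hypothetical sensor data: 1,000 samples of 16 features, 3 classes.
x_train = np.random.rand(1000, 16).astype("float32")
y_train = np.random.randint(0, 3, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# --- Inference phase: the trained, frozen model answers queries ---
# On an MCU this step runs via an engine such as TensorFlow Lite;
# it is shown here in Python only for brevity.
sample = np.random.rand(1, 16).astype("float32")
print("Predicted class:", int(model.predict(sample).argmax()))
```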

 

So far, so simple, it seems. But within each phase are various development tasks which are unfamiliar to an embedded developer who has no previous experience in machine learning. Below, NXP’s diagram outlines the tasks within each part of the workflow.


Figure 2: The basic workflows in an embedded machine learning development. (Image courtesy of NXP Semiconductors.)

 

Not only is the process itself new to the embedded engineer: so too is much of the technology, terminology and jargon. Before training a neural network, for instance, the developer will need to decide which kind of network is most appropriate for the application. Convolutional Neural Network (CNN) models are widely used for image recognition applications, while a Recurrent Neural Network (RNN) might be appropriate for recognizing patterns in time-series data. A basic catalogue of neural networks hosted at towardsdatascience.com lists more than 25 types.
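As an illustration of the choice, the sketch below defines skeletal Keras versions of both network types. The layer sizes, input shapes and class counts are arbitrary assumptions chosen for brevity, not recommendations for any particular application.

```python
import tensorflow as tf

# A small CNN for image recognition, e.g. 32x32 grayscale images
# sorted into 10 classes (all sizes are illustrative assumptions):
cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# A small RNN (here an LSTM) for time-series data, e.g. 100 timesteps
# of a 3-axis accelerometer signal sorted into 4 activity classes:
rnn = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 3)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(4, activation="softmax"),
])
```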

For each neural network type, there are often hundreds of algorithms optimized for specific functions. These software elements tend to have their own jargon which can be hard for the first-time user to interpret. NXP’s list of neural networking models supported by its eIQ enablement tool provides examples of neural network algorithms.

None but the largest embedded development teams will have the time and resources to educate themselves on all the key aspects of machine learning before beginning their first AI project. For smaller development teams, there is an alternative: component manufacturers such as NXP and Lattice Semiconductor have developed production-ready reference design hardware and software for applications such as people detection, people counting and speech recognition. QuickLogic also provides low-power sound-detection and speech-recognition solutions running on its QuickAI™ platform. These offerings provide the easiest and quickest introduction to machine learning. Lattice even supplies its training data sets, enabling OEMs to modify the neural networking model contained in each reference design.

Training Phase: Expert Help Available from Future Electronics

If the intended application is not supported by a ready-made reference design, the OEM will need to implement the training and inference processes. Of the two phases, the inference phase is the more familiar to embedded developers: essentially, this involves taking a trained model and compiling it for a specific hardware target, such as an i.MX RT crossover microcontroller from NXP, an STM32F7 MCU from STMicroelectronics or QuickLogic’s QuickAI platform.

To a greater or lesser extent, the suppliers of these hardware devices provide development tools which make the process of compiling a trained model to target hardware reasonably intuitive and straightforward. NXP, for instance, provides the eIQ enablement tool for its MCUs and applications processors. The eIQ tool supports TensorFlow Lite, Arm® NN, OpenCV and other inference engines. Likewise, ST supplies the STM32Cube.AI tool for converting a neural network into optimized code for specific STM32 MCU parts.
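The common step underneath these vendor tools can be sketched as follows: converting a trained model into a compact TensorFlow Lite flatbuffer that an embedded inference engine can execute. The model below is a placeholder, and the quantization setting shown is one common option rather than a prescription from NXP or ST.

```python
import tensorflow as tf

# Placeholder for a trained Keras model such as the one sketched
# earlier in this article.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert the model to a TensorFlow Lite flatbuffer. The default
# optimizations quantize weights, shrinking the model for MCU targets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Vendor tools such as NXP's eIQ or ST's STM32Cube.AI take a model
# like this and emit code optimized for their own silicon; a generic
# fallback is to embed the flatbuffer as a C array:
#   xxd -i model.tflite > model_data.c
```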

The big difference from a standard embedded development workflow is in the training phase. In a typical MCU development project, the code base for an entire application may be created within a single IDE such as IAR Embedded Workbench or Keil MDK.

In a machine learning project, however, the training phase is not supported within an MCU’s, processor’s or FPGA’s development environment: the embedded engineer is thrown into new territory.

Referring to the NXP workflow diagram above, each stage of the process calls for specialist know-how and techniques. Engineers preparing a training data set for the first time will have much to learn about how to collect raw data, how to label and curate it, how to extract features and so on before submitting it to a model training framework such as TensorFlow, Caffe or Keras. Likewise, each of these frameworks has its own process flow, user interface and data protocols.
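As a hedged illustration of what collecting, labeling and feature extraction can involve, the sketch below windows a hypothetical accelerometer stream and computes simple per-axis statistics. The sample rate, window length, labels and features are all invented for the example.

```python
import numpy as np

# Hypothetical raw capture: a 3-axis accelerometer sampled at 100 Hz
# for 10 minutes, with one activity label per sample (0 = idle,
# 1 = walking). Real data would come from the sensor, not random noise.
raw = np.random.rand(60_000, 3).astype("float32")
labels = np.random.randint(0, 2, size=(60_000,))

def make_windows(data, labels, size=200, step=100):
    """Slice the stream into overlapping 2-second windows and extract
    simple features (mean and standard deviation per axis)."""
    feats, ys = [], []
    for start in range(0, len(data) - size, step):
        window = data[start:start + size]
        feats.append(np.concatenate([window.mean(axis=0), window.std(axis=0)]))
        # Label each window with the majority label of its samples.
        ys.append(np.bincount(labels[start:start + size]).argmax())
    return np.array(feats), np.array(ys)

x_train, y_train = make_windows(raw, labels)
print(x_train.shape)  # (598, 6): six features per window
```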

There is abundant documentation available online for embedded engineers to study. But no matter how much an engineer has prepared in theory, there is no substitute for getting their hands dirty with a prototype project – and in the early stages of a project, developers can gain a huge amount from the advice and guidance of machine learning experts.

This is where Future Electronics has much to offer: its large team of branch-level field applications engineers is supplemented by specialists in high-demand technology areas, of which machine learning/AI is one. Regional Advanced Engineer Specialists in AI are dedicated to this field, and are on hand to guide OEM developers, either up-front in planning a new development project, or during a project to help solve particular problems.

Through the regional Centres of Excellence, an OEM can even outsource part or all of a design project to Future Electronics, providing a complete turnkey solution for machine learning.

An AI Toolkit Built for the Embedded World

As described above, most of the tools and frameworks supported by MCU, processor and FPGA manufacturers are derived from the computer science world: they are large, sophisticated, highly capable – and difficult to learn in a short period of time.

This is why the approach taken by SensiML, a subsidiary of programmable system-on-chip manufacturer QuickLogic, is different and interesting. SensiML, which has its roots in a division of Intel, created its SensiML Edge AI Software Toolkit to provide a complete, end-to-end environment in which embedded developers could be instantly productive.

Figure 3: The simple machine learning workflow provided by SensiML. (Image courtesy of QuickLogic.)

According to SensiML, its Edge AI Software Toolkit ‘enables developers to build intelligent sensing devices in days or weeks without data science or embedded firmware expertise’. It can be used to develop applications such as industrial machine predictive maintenance, activity monitoring in consumer wearable devices, livestock monitoring in smart agriculture, and traffic analysis for retail stores.

The process shown above, like the NXP diagram at the top of the article, breaks the development down into a training phase (steps 1-3) and an inference phase (step 4). The difference, according to SensiML, is that its toolkit is fast, intelligent and complete:

  • It calls for no hand-coding, but generates code automatically.
  • It requires no expertise in data pre-processing – all the developer has to do is collect data samples. The toolkit includes the SensiML Data Capture Lab module to support data capture.
  • It automates the entire process, from collecting the training data through to the generation of a trained algorithm.

Because the SensiML toolkit was designed for embedded engineers, it does not assume that the output has to be a complex neural network model. For applications that generate time-series data, such as predictive maintenance or personal activity monitoring, a simpler algorithm type such as a classifier is often superior to a neural network. This simpler algorithm is not only easier to generate, modify and refine; it also requires fewer resources in the target hardware, enabling the OEM to build a project around a low-power target such as an Arm Cortex®-M core-based MCU or QuickLogic’s own QuickAI™ programmable platform, whereas a more complex neural network model might typically require an applications processor or mid-density FPGA.
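To show the kind of simpler algorithm in question – as an assumption-laden sketch, not SensiML’s actual method – the example below trains a small random forest classifier with scikit-learn on feature vectors like those produced in the windowing sketch earlier. All sizes and hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature vectors like those produced by the windowing
# sketch above: 6 features per window, 2 classes.
x = np.random.rand(598, 6)
y = np.random.randint(0, 2, size=(598,))
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2)

# A handful of shallow trees compiles down to simple branch-and-compare
# logic, well within reach of a Cortex-M class MCU.
clf = RandomForestClassifier(n_estimators=10, max_depth=4)
clf.fit(x_tr, y_tr)
print("Held-out accuracy:", clf.score(x_te, y_te))
```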

A Helping Hand as New Resources Emerge

Machine learning is such a new phenomenon in the embedded world that the offerings from different manufacturers have not become standardized, and today there are wide differences in the scope of the support that different suppliers’ toolchains provide for AI projects.

While SensiML provides the most comprehensive toolkit, the environments and services provided by manufacturers such as ST and NXP for MCUs and processors, as well as Lattice and Microchip Technology for FPGAs, support a growing number of the popular training frameworks and provide optimized compilation performance for their own products.

And where a gap between third-party frameworks and the semiconductor manufacturers’ tools has to be bridged, specialist experts at Future Electronics are on hand to provide guidance, know-how and hands-on assistance.
