Machine Learning for Beginners: Now You Don’t Need a Data Science Degree to Build AI at the Edge
Empowering Embedded Engineers in AI Development
Developing artificial intelligence (AI) applications used to be the preserve of data scientists and specialist engineers who were well-versed in building effective machine learning (ML) algorithms and deep learning neural networks. While this remains true for headline-grabbing generative AI applications such as ChatGPT, growing demand to deploy AI at the device level means that the task of creating machine learning applications is increasingly falling to embedded engineers with little or no previous AI experience.
The good news is that hardware manufacturers and software developers have recognized this trend and are coming together to create a growing ecosystem of devices, tools and support. This ecosystem makes it easier than ever to implement AI and ML algorithms for collecting and organizing data, training neural networks and running optimized inference at the edge.
A Changing Approach to AI Development
Historically, most machine learning implementations demanded significant resources and had to be written in complex languages to run on powerful computers or cloud servers. Models were developed by qualified data scientists, who would spend a great deal of time crafting algorithms capable of extracting patterns and correlations from very large data sets to create predictive AI models.
Figure 1: Potential applications for AI/ML
Now, however, a growing demand to reap the benefits of AI across more and more applications (see figure 1) is driving two major trends.
The first is to take AI away from centralized servers by embedding it into smart, connected devices. This so-called Edge AI or AI-at-the-Edge allows small battery-powered devices to detect complex motions, recognize sounds, classify images, find anomalies in sensor data and quickly process a whole host of other real-world inputs without the need to connect to a remote server.
The second is a growing demand for embedded engineers, most of whom have little or no previous experience of ML or AI, to deliver AI-based solutions.
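To make the first trend concrete, consider one of the on-device tasks mentioned above: finding anomalies in sensor data. The following minimal sketch (illustrative only, not taken from any particular Edge AI toolchain) shows how a lightweight rolling z-score check can flag unusual sensor readings entirely on-device, with no connection to a remote server. The class name and thresholds are hypothetical choices for the example.

```python
# Illustrative sketch: a lightweight on-device anomaly check of the kind an
# Edge AI node might run on streaming sensor data. A sample is flagged when
# it deviates strongly (by z-score) from a sliding window of recent history.
from collections import deque
import math

class RollingAnomalyDetector:
    """Flags samples that deviate strongly from a sliding window of history."""

    def __init__(self, window=32, threshold=3.0):
        self.buf = deque(maxlen=window)   # bounded history: MCU-friendly memory
        self.threshold = threshold        # z-score above which a sample is anomalous

    def update(self, x):
        buf = self.buf
        is_anomaly = False
        if len(buf) >= 8:  # wait for enough history before judging
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var) or 1e-9  # guard against a zero-variance window
            is_anomaly = abs(x - mean) / std > self.threshold
        buf.append(x)
        return is_anomaly

# Simulated accelerometer magnitudes with a spike at the end:
det = RollingAnomalyDetector()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 1.0, 9.0]
flags = [det.update(r) for r in readings]  # only the final spike is flagged
```

Because the detector keeps only a small fixed-size buffer and uses simple arithmetic, the same logic ports naturally to C on a resource-constrained microcontroller.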
Supporting Edge AI Development
One factor accelerating the rapid growth of Edge AI has been the emergence of the concept of “Tiny Machine Learning,” or TinyML. The tinyML Foundation defines this as the “fast growing field of machine learning technologies and applications including hardware (dedicated integrated circuits), algorithms and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use-cases and targeting battery operated devices.”
Support for the TinyML movement has resulted in an explosion of tools and support that make the job of the embedded design engineer easier and ensure that ML development need no longer be the preserve of data scientists or dedicated engineers with knowledge and experience of neural networks and complex programming languages.
Processing Hardware
Of course, Edge AI applications still have to run on processors, though clearly not the high-performance computing (HPC) processors one finds in today’s data centers. Indeed, one of the key challenges in implementing successful AI-at-the-Edge has always been delivering the requisite processing power within ultra-low power budgets and tight form factors. As models grow in complexity, Edge AI devices face a steep tradeoff between accuracy and resource demands: greater accuracy calls for a more expensive processing chip that consumes more energy. This is why, to date, the majority of machine learning models deployed on smart devices have been small, with the AI task limited to solving a simple problem.
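The accuracy-versus-resources tradeoff can be made tangible with a back-of-envelope calculation. The sketch below (illustrative figures and layer sizes, not from the article) estimates the weight-storage footprint of a small fully connected network and shows how quickly a modestly larger model outgrows the kilobytes of flash and RAM typical of MCU-class devices.

```python
# Back-of-envelope sketch of the resource tradeoff: each added parameter
# costs memory, and MCU-class devices measure RAM/flash in kilobytes.
def model_footprint_bytes(layer_sizes, bytes_per_weight=4):
    """Weight + bias storage for a fully connected network (float32 by default)."""
    weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    biases = sum(layer_sizes[1:])
    return (weights + biases) * bytes_per_weight

tiny = model_footprint_bytes([32, 16, 4])          # toy net: ~2.3 KiB of weights
bigger = model_footprint_bytes([256, 128, 64, 10]) # ~163 KiB: already a squeeze
print(f"tiny:   {tiny / 1024:.1f} KiB")   # easily fits a small MCU's flash
print(f"bigger: {bigger / 1024:.1f} KiB") # may exceed a low-end MCU entirely
```

The same arithmetic also explains why cutting the per-weight cost (for example from 4-byte floats to 1-byte integers) is such an effective lever for edge deployment.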
Fortunately, the latest microcontroller (MCU), microprocessor (MPU) and Field Programmable Gate Array (FPGA) technologies are addressing this challenge by helping developers to build more complex Edge AI applications without compromising on power or size. Advances, for example, such as low-power 32-bit MPUs that incorporate dedicated camera interfaces, enable developers to design low-power stereo vision applications with more accurate depth perception while allowing for fine-tuning of power consumption vs. performance.
Figure 2: Low-power, high-performance Edge AI MPU with embedded camera interfaces
What’s more, a combination of the latest semiconductor technology and the availability of algorithms that have been specifically developed to enable AI/ML to perform more efficiently on edge devices means that it is now possible to deliver ML solutions on the lowest power 8-bit and 16-bit microcontrollers.
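One widely used example of such an efficiency technique is post-training int8 quantization, which maps float32 weights onto 8-bit integers so that a small MCU can store and multiply them cheaply. The sketch below shows the common affine scale/zero-point scheme in its simplest per-tensor form; real toolchains differ in detail, and the function names here are hypothetical.

```python
# Hedged sketch of affine int8 quantization: float weights are mapped to the
# int8 range via a scale and zero point, cutting storage to a quarter while
# keeping reconstruction error below one quantization step.
def quantize_int8(weights):
    """Affine-quantize a list of floats to int8 with a single per-tensor scale."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1e-9           # guard against constant weights
    zero_point = round(-128 - lo / scale)     # int8 value that represents 0.0
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

w = [-0.51, -0.03, 0.0, 0.27, 0.49]
q, s, zp = quantize_int8(w)
w_hat = dequantize(q, s, zp)  # close to the originals, at 1/4 the storage
```

In practice, production flows quantize per-channel and calibrate activations as well, but the core idea of trading a small accuracy loss for a 4x storage and bandwidth saving is exactly what makes ML viable on 8-bit and 16-bit parts.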
And whether developing with an advanced FPGA or the lowest cost 8-bit MCU, there is a wealth of additional support available to make life easier for the engineer by simplifying design and testing, speeding prototyping and reducing the overall time-to-market for Edge AI solutions. This includes reference designs; accelerator Software Development Kits (SDKs) for programming power-efficient neural networks without any prior FPGA design experience; and out-of-the-box ML development and evaluation boards that combine MCUs, gyroscopes, accelerometers, motion detectors and a variety of other sensing technologies in a single unit.
The Future
A recent report by analysts at Grand View Research, Inc. anticipates that the global Edge AI market will grow at a compound annual growth rate (CAGR) of 21% from 2023 to 2030, reaching $66.47 billion. The report cites rising demand for IoT-based edge computing services as a key driver of this growth: massive amounts of sensor data need to be analyzed to automate operational decisions in applications ranging from intelligent surveillance and security to smart farming and industrial manufacturing.
In support of this growth, we can expect further hardware and software developments that are designed to simplify and speed the development of Edge AI applications and make AI and ML design a possibility for each and every embedded engineer. The development challenge will be further simplified by increased availability of open-source data sets that eliminate the time-consuming need to collect data related to specific applications.
Finally, as the rise of generative AI applications has shown, AI itself is becoming a useful tool for helping novices to code complex applications, while opting for ML training over direct coding could lead to shorter development times, fewer re-spins and improved output per engineer.
Yann LeFaou, Apr 18, 2024
Tags/Keywords: AI-ML