Industrial edge AI supports real-time analytics in sectors like manufacturing and building management. Low-power processors deliver modest AI performance, but designers must balance power budgets, temperature ranges, and cooling constraints. Using TOPS per Watt as a metric helps engineers compare platforms by energy efficiency and choose a solution that meets both performance and environmental requirements.
Edge computing has secured a central role in industrial and building automation, with industrial gateways increasingly performing tasks like monitoring process conditions, detecting anomalies in workflows, and optimizing the power consumption of on-site equipment. These capabilities are enabled by the growing intelligence of edge devices, facilitated by the introduction of artificial intelligence (AI) platforms that can perform dedicated workloads directly on the device and in real time, instead of on remote cloud servers.
The range of edge AI workloads is broad, spanning simple classification tasks executed on integrated micro neural processing units (microNPUs) to complex vision pipelines for product classification on conveyor belts in quality inspection systems. In the latter scenario, the edge device requires dedicated AI hardware accelerators to process data from multiple sensors or cameras with minimal latency.
To describe, measure, and compare AI performance, tera operations per second (TOPS) has become a widely used metric, but TOPS specification alone is insufficient for evaluating AI hardware suitability for industrial edge applications. When selecting the right edge device, users must balance AI performance with power consumption, thermal management, and reliability in −20°C to +70°C environments.
Normalizing AI performance using TOPS/W
When selecting AI hardware for edge applications, it is important to consider how real-world thermal conditions—and the resulting power constraints—affect available computing performance when compared to peak performance. For this reason, TOPS per Watt (TOPS/W) has emerged as a more meaningful comparison metric for edge AI platforms.
By directly relating available AI computing performance to power consumption, TOPS/W enables a more reasonable comparison between different AI hardware approaches. For industrial gateways, three typical AI hardware solutions can be distinguished, as shown in Table 1.
Table 1: A comparison of different edge AI hardware solutions for industrial gateways.
| Hardware Type | SoC with microNPU (Arm Ethos NPU) | GPU module (NVIDIA Jetson Orin) | Dedicated hardware accelerator (Hailo-8) |
|---|---|---|---|
| Typical power consumption | ~1 to 2 W | ~5 to 15 W | ~2.5 W |
| AI efficiency | ~0.5 TOPS/W | ~2.7 to 7.5 TOPS/W | >10 TOPS/W |
| Ideal application areas | Energy-efficient always-on edge devices, presence and status detection, data preprocessing | Autonomous mobile robots, process control optimization, multi-camera systems | Multi-sensor fusion in predictive maintenance, anomaly detection, object counting |
| Advantages | Low thermal requirements enable fanless designs | Complex workloads can be handled efficiently at high utilization | Leading TOPS/W efficiency; passive cooling possible depending on the workload |
| Downsides | Limited performance for complex workloads like real-time vision | Typically requires active cooling, increasing system complexity; relatively expensive | Requires a host processor, which strongly influences system-level efficiency; relatively expensive |
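The normalization behind Table 1 is simple division of throughput by power draw. The short sketch below reproduces the table's approximate figures; the TOPS and wattage values are illustrative assumptions consistent with the ranges quoted above, not measured benchmarks.

```python
# Illustrative TOPS/W comparison of the three edge AI hardware classes.
# Figures are approximate values consistent with Table 1, not measurements.

def tops_per_watt(tops: float, watts: float) -> float:
    """Normalize peak AI throughput by typical power consumption."""
    return tops / watts

platforms = {
    # name: (approx. peak TOPS, approx. power in W)
    "SoC with microNPU":       (0.5, 1.0),
    "GPU module":              (40.0, 15.0),
    "Dedicated accelerator":   (26.0, 2.5),
}

for name, (tops, watts) in platforms.items():
    print(f"{name}: {tops_per_watt(tops, watts):.1f} TOPS/W")
```

Run against these assumed figures, the microNPU lands at 0.5 TOPS/W, the GPU module at the lower end of its ~2.7 to 7.5 TOPS/W range, and the dedicated accelerator above 10 TOPS/W, matching the table.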
For many resource-constrained industrial gateway applications, a system on chip (SoC) with an integrated microNPU represents a balanced compromise, offering sufficient AI performance at a power consumption low enough to significantly simplify thermal management. For example, the NXP i.MX 93 SoC has demonstrated energy-efficient edge AI pose estimation running directly on its integrated microNPU, detecting body postures or potentially hazardous situations with ultra-low latency to improve industrial worker safety. Because the microNPU places minimal computational, thermal, and power demands on the complete system, the application can run continuously. Considering TOPS/W rather than peak TOPS enables users to recognize this capability without being misled by the hidden thermal management requirements of more powerful hardware.
An Edge Gateway for Low-Maintenance AI Applications
To support reliable edge AI functionality in environments ranging from −20°C to +70°C, users need a robust industrial gateway that enables connections to local sensing and control infrastructure. Based on SECO’s SOM-SMARC-MX93 computer-on-module (COM), the Modular Link MX93 is a fanless, DIN-rail-mounted industrial PC measuring 140 x 96 x 36 mm.
At its core is the aforementioned NXP i.MX 93 SoC, which features two Arm Cortex-A55 application cores, one Arm Cortex-M33 real-time core, and an Arm Ethos-U65 microNPU delivering up to 0.5 TOPS for edge AI workloads. Shown in Figure 1, this platform supports up to 2 GB LPDDR4 memory, dual Gigabit Ethernet, and multiple high-speed interfaces, including dual USB 2.0, digital I/O, and serial interfaces.
Figure 1: The SECO Modular Link MX93 industrial gateway provides sufficient AI performance for industrial and building automation applications.
For industrial integration, the flexible I/O options are particularly important: a software-configurable UART RJ12 connector supports RS-232, RS-422, or RS-485, making the platform well suited for retrofit scenarios involving existing installations and legacy equipment. The modular design also features stackable daughter systems that allow expansion to accommodate application-specific connections.
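To illustrate what "software-configurable" means in practice for such a port, the sketch below maps the three supported serial standards to the electrical properties that matter when matching legacy equipment. The `UartMode` type and `select_mode` helper are hypothetical illustrations, not part of any SECO API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UartMode:
    name: str
    differential: bool   # RS-422/RS-485 use differential signaling for noise immunity
    full_duplex: bool    # RS-485 is typically half duplex on a two-wire bus
    multidrop: bool      # RS-485 supports multiple devices on one bus

# The three standards selectable on the RJ12 connector.
UART_MODES = {
    "RS-232": UartMode("RS-232", differential=False, full_duplex=True,  multidrop=False),
    "RS-422": UartMode("RS-422", differential=True,  full_duplex=True,  multidrop=False),
    "RS-485": UartMode("RS-485", differential=True,  full_duplex=False, multidrop=True),
}

def select_mode(legacy_bus: str) -> UartMode:
    """Pick the port mode matching a legacy device's bus type (hypothetical helper)."""
    try:
        return UART_MODES[legacy_bus]
    except KeyError:
        raise ValueError(f"unsupported bus type: {legacy_bus}")
```

In a retrofit, the installer would select the mode matching the existing field wiring, e.g. `select_mode("RS-485")` for a multidrop sensor bus, with no hardware change to the gateway.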
In the safety scenario mentioned before, the Modular Link MX93 would connect directly to an industrial camera or vision sensor via its high-speed interfaces. Video streams are preprocessed on the NXP i.MX 93 SoC, while the AI inference itself runs on the integrated Arm Ethos-U65 microNPU. The resulting data is evaluated locally to detect unsafe postures or movements, allowing the system to trigger warnings or forward events to higher-level control systems, without the need to stream raw video data to the cloud.
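The on-device safety loop described above can be sketched as follows. This is a structural illustration only: `capture_frame`, `preprocess`, and `run_npu_inference` are hypothetical stand-ins for the camera driver, CPU-side preprocessing, and the Ethos-U65 inference call (which in practice would go through an NXP-supplied inference runtime), not real APIs.

```python
import random
from typing import List

UNSAFE_POSTURES = {"bending_under_load", "reaching_into_machine"}

def capture_frame() -> List[float]:
    # Placeholder for grabbing a frame from the industrial camera.
    return [random.random() for _ in range(16)]

def preprocess(frame: List[float]) -> List[float]:
    # Placeholder for resizing/normalization on the Cortex-A55 application cores.
    return [min(max(x, 0.0), 1.0) for x in frame]

def run_npu_inference(tensor: List[float]) -> str:
    # Placeholder for the pose-estimation model on the Ethos-U65 microNPU.
    return "bending_under_load" if sum(tensor) > 12.0 else "safe_posture"

def evaluate(posture: str) -> str:
    # Local decision: warn or forward an event; no raw video leaves the device.
    if posture in UNSAFE_POSTURES:
        return f"WARNING: unsafe posture detected ({posture})"
    return "status: ok"

if __name__ == "__main__":
    for _ in range(3):
        print(evaluate(run_npu_inference(preprocess(capture_frame()))))
```

The key point the structure makes explicit: only the classification result leaves the inference stage, so the decision and any upstream event can be handled locally without streaming video off the device.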
In addition to a formidable hardware build, the Modular Link MX93 is complemented by Clea, SECO’s comprehensive software ecosystem that supports device management, updates, and the secure operation of distributed edge AI deployments. The Yocto-based Clea OS simplifies application deployment on the Modular Link MX93 and across the majority of SECO edge products. By completing the hardware-software ecosystem in this way, SECO supports industrial gateway users in gaining the full benefits of edge AI for the duration of the deployment lifecycle.
Selecting Edge AI Hardware for Industrial Applications
As the number of edge AI hardware options increases with market demand, choosing the right solution for an industrial or building automation deployment remains a challenging task. Still, a structured approach helps avoid poor design decisions at the outset, so users should follow three key steps when selecting an edge AI platform:
- Match computing performance to the actual use case; not every application requires maximum TOPS.
- Use TOPS/W as a selection criterion to identify energy-efficient platforms for simple AI tasks and reserve higher-performance accelerators for more complex applications.
- Consider system-level factors, including power supply, thermal management, environmental conditions, I/O requirements, and the available software ecosystem, as these are critical for long-term operation.
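The three steps above can be sketched as a simple screening function: filter by required performance and system-level constraints, then rank the remainder by TOPS/W. The platform records and thresholds are illustrative assumptions, not SECO selection tooling.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Platform:
    name: str
    tops: float      # peak AI throughput
    watts: float     # typical power consumption
    fanless: bool    # can it run without active cooling?

def shortlist(candidates: List[Platform], required_tops: float,
              power_budget_w: float, needs_fanless: bool) -> List[Platform]:
    """Screen platforms per the three steps: performance fit, efficiency, system factors."""
    fit = [p for p in candidates
           if p.tops >= required_tops             # step 1: enough (not maximum) TOPS
           and p.watts <= power_budget_w          # step 3: system-level power budget
           and (p.fanless or not needs_fanless)]  # step 3: thermal/environment
    # step 2: prefer the most energy-efficient remaining option
    return sorted(fit, key=lambda p: p.tops / p.watts, reverse=True)

candidates = [
    Platform("SoC with microNPU", tops=0.5, watts=1.0, fanless=True),
    Platform("GPU module", tops=40.0, watts=15.0, fanless=False),
    Platform("Dedicated accelerator", tops=26.0, watts=2.5, fanless=True),
]

# e.g. a fanless gateway needing modest AI performance within a 3 W budget:
print([p.name for p in shortlist(candidates, required_tops=0.4,
                                 power_budget_w=3.0, needs_fanless=True)])
# → ['Dedicated accelerator', 'SoC with microNPU']
```

Note that the ranking deliberately comes last: efficiency only decides among platforms that already satisfy the use case and the deployment environment.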
For many applications operating under limited power budgets, the microNPU-based AI processing of the SECO Modular Link MX93 offers a well-balanced solution. With 0.5 TOPS of AI performance adding approximately 1 W to total system power consumption, thermal management is simplified, enabling robust, low-maintenance edge AI gateways for use in industrial automation, robotics, building management technology, and beyond.
To learn more about edge AI for industrial applications, contact SECO experts today and gain the full benefits of their edge computing expertise.