Channel: EDN

A design platform for swift vision response system – Part 1

Trigger-based vision systems in embedded applications are used across domains to automate responses to visual input, typically in real time. These systems detect specific conditions or events, such as motion, object recognition, or pattern detection, and trigger actions accordingly.

Key applications include:

  • Surveillance and security: Detecting motion or unauthorized individuals to trigger alarms or recording.
  • Robotics: Identifying and manipulating objects, triggering robotic actions like picking or sorting based on visual cues.
  • Traffic monitoring: Triggering traffic light changes or fines when specific conditions, such as running a red light, are detected.
  • Forest monitoring: Trigger-based vision systems can be highly effective in forest environments for applications such as wildlife monitoring, forest fire detection, illegal logging prevention, animal detection, and trail cameras.
  • Military and defense: Vision systems used in drones, surveillance systems, and military robots for threat detection and target identification.

These systems leverage camera technologies combined with environmental sensors and AI-based image processing to automate monitoring tasks, detect anomalies, and trigger timely responses. For instance, in wildlife monitoring, vision systems can identify animals in remote areas, while in forest fire detection, thermal and optical cameras can spot early signs of fire or smoke.

Low wakeup latency in trigger-based systems is crucial for fast, efficient responses to external events such as sensor activations and button presses. These systems rely on triggers to initiate specific actions, and minimizing latency ensures the system responds almost instantly to these stimuli. A device that wakes quickly when triggered can remain in a low-power state for longer, and the longer a device stays in a low-power state, the more energy it conserves.

In summary, low wakeup latency improves a system’s responsiveness, reliability, scalability, and energy efficiency, making it indispensable in applications that depend on timely event handling and quick reactions to triggers.

The Aikri platform, developed by eInfochips, validates this concept. The platform is based on Qualcomm’s QRB4210 chipset and runs an OpenEmbedded-based Linux distribution.

To simulate a real-life trigger scenario, the Aikri platform is put into a low-power state using a shell script and is woken by a real-time clock (RTC) alarm. The latency between the wakeup interrupt and the frame-reception interrupt at the double data rate (DDR) memory has been measured at ~400 ms to ~500 ms. Subsequent sections discuss the measurement setup and approach at length.

Aikri platform: Setup details

  1. Hardware setup

The Aikri platform is used to simulate the use case. The platform is based on Qualcomm’s QRB4210 chipset and demonstrates diverse interfaces for this chipset.

The current scope uses only a subset of interfaces available on the platform; refer to the following block diagram.

Figure 1 The block diagram shows hardware peripherals used in the module. Source: eInfochips

The QRB4210 system-on-module (SoM) contains Qualcomm’s QRB4210 application processor, which connects to DDR RAM, embedded multimedia card (eMMC) as storage, Wi-Fi, and power management integrated circuit (PMIC). The display serial interface (DSI)-based display panel is connected to the DSI connector available on the Aikri platform.

Similarly, the camera daughter board is connected to CSI0 port of the platform. The camera daughter card contains an IMX334 camera module. The camera sensor outputs 3864×2180 at 30 frames per second on four lanes of camera serial interface (CSI) port.

DSI panel is built around the OTM1901 LCD. This LCD panel supports 1920×1080 output resolution. Four lanes of the DSI port are used to transfer video data from the application processor to the LCD panel. PMIC available on QRB4210 SoM contains RTC hardware. While the application processor goes to the low-power mode, the RTC hardware inside the PMIC remains active with the help of a sleep clock.

  2. Software setup

The QRB4210 application processor runs an OpenEmbedded-based Linux distribution using the 5.4.210 Linux kernel version. The default distribution is trimmed down to reduce wakeup latency while retaining necessary features. A bash script is used to simulate the low-power mode entry and wakeup scenario.

The Weston server generates display graphics, and GStreamer captures frames from the camera sensor. Wakeup latency is measured by taking timestamps in the Linux kernel when the relevant interrupt service routines are called.

Latency measurement: Procedure overview

To simulate the minimal-latency wakeup use case, a shell script is run on the Aikri platform. The script automates the simulation of a trigger-based, low-latency vision system on the Aikri QRB4210 module.

Below is the script’s flow on the QRB4210 platform, from device bootup through latency measurement.

Figure 2 Test script flow spans from device bootup to latency measurement. Source: eInfochips

The above diagram showcases the operational flow of the script, beginning with the device bootup, where the system initializes its hardware and software. After booting, the device enters the active state, signifying that it’s fully operational and ready for further tasks, such as keeping Wi-Fi configured in an inactive state and probing the camera to check its connection and readiness.

Additionally, it configures the GStreamer pipeline for a 1280×960@30 FPS stream. The camera sensor registers are also configured at this stage based on the best-match resolution mode; during this exercise, 3840×2160@30 FPS is the mode selected for the IMX334 camera sensor. Once the camera is confirmed as configured and functional, the device moves to the camera reconfigure step, where it adjusts the camera stream settings, such as stop/start.
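As a rough illustration, the pipeline-configuration step could be expressed as a shell sketch like the one below. The element names (qtiqmmfsrc, waylandsink) and caps string are assumptions based on typical Qualcomm Linux camera stacks, not the exact eInfochips script.

```shell
#!/bin/sh
# Sketch of the preview pipeline configuration step.
# Element names are assumptions, not the exact eInfochips pipeline.

WIDTH=1280
HEIGHT=960
FPS=30

# Build the GStreamer pipeline description for the preview stream.
preview_pipeline() {
    echo "qtiqmmfsrc ! video/x-raw,format=NV12,width=$WIDTH,height=$HEIGHT,framerate=$FPS/1 ! waylandsink"
}

# Launch the preview in the background (requires the platform's GStreamer plugins).
start_preview() {
    gst-launch-1.0 $(preview_pipeline) &
}
```

The sensor itself still runs in its best-match 3840×2160@30 FPS mode; the caps filter above only describes the stream requested from the camera subsystem.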

The next step is to set the RTC wake alarm, followed by putting the device into suspend mode. In this state, the device waits for the RTC alarm to wake it. Once the alarm fires, the device transitions to the wakeup state and starts the camera stream.

The device then waits for the first frame to arrive in DDR and measures the latency between capturing the frame and device wakeup Interrupt Request (IRQ). After measuring latency, the device returns to the active state, where it remains ready for further actions.

The process then loops back to the camera reconfigure step, repeating the sequence of actions until the script stops externally. This loop allows the device to continuously monitor the camera, measure latency, and conserve power during inactive periods, ensuring efficient operation.
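The loop described above can be condensed into a minimal shell sketch. The sysfs paths are standard Linux interfaces; the 10-second alarm delay and the latency log format are assumptions, not details taken from the eInfochips script.

```shell
#!/bin/sh
# Minimal sketch of the measurement loop, under the assumptions stated above.

# Absolute RTC alarm time: $1 seconds from now, as seconds since the Unix epoch.
alarm_time() {
    echo $(( $(date +%s) + $1 ))
}

measure_once() {
    # 1. Reconfigure the camera stream (stop/start) -- platform-specific.
    # 2. Arm the RTC alarm and suspend; execution resumes here on wakeup.
    echo 0 > /sys/class/rtc/rtc0/wakealarm        # clear any stale alarm
    alarm_time 10 > /sys/class/rtc/rtc0/wakealarm
    echo mem > /sys/power/state                   # suspend-to-RAM, blocks until wakeup
    # 3. Start the camera stream; the instrumented kernel logs the
    #    wakeup-to-frame latency (log format is an assumption).
    dmesg | tail -n 20 | grep -i latency
}

# while true; do measure_once; done   # loop until stopped externally
```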

Latency measurement strategy

While the device is in a suspended state and the RTC alarm triggers, the time between two key events is measured: the wakeup interrupt and the reception of the first frame from the camera sensor into the DDR buffer. The latency data is measured in three different scenarios, as outlined below:

  • When the camera is in the preview mode
  • When recording the camera stream to eMMC
  • When recording the camera stream to the SD card
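Hypothetical gst-launch pipelines for the two recording scenarios might look as follows; the encoder element and the mount points (/data for eMMC-backed storage, /run/media/sdcard for the SD card) are assumptions, not the exact eInfochips configuration.

```shell
#!/bin/sh
# Sketch of the recording pipelines; element names and paths are assumptions.

# Build a recording pipeline that encodes the stream and writes it to $1.
record_pipeline() {
    echo "qtiqmmfsrc ! video/x-raw,format=NV12,width=1280,height=960,framerate=30/1 ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=$1"
}

# -e sends EOS on interrupt so the MP4 file is finalized cleanly.
record_to_emmc() { gst-launch-1.0 -e $(record_pipeline /data/record.mp4) & }
record_to_sd()   { gst-launch-1.0 -e $(record_pipeline /run/media/sdcard/record.mp4) & }
```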

Figure 3 Camera pipeline is shown in the preview mode. Source: eInfochips

Figure 4 Camera pipeline is shown in the recording mode. Source: eInfochips

As shown in the above figures, after the DDR receives the frame, it moves to the offline processing engine (OPE) before returning to the DDR. From there, the display subsystem previews the camera sensor data. In the recording use case, the data is transferred from DDR to the encoder and then written to storage. Once the frame is available in DDR, it is guaranteed to be either stored in storage or previewed on the display.

Depending on the processor CPU occupancy, it may take a few milliseconds to process the frame, based on the GStreamer pipeline and the selected use case. Therefore, while measuring latency, we consider the second polling point to be when the frame is available in the DDR, not when it’s stored or previewed.

Since capturing the trigger event is crucial, minimizing latency when capturing the first frame from the camera sensor is essential. The frame is considered available in the DDR when the thin front-end (TFE) completes processing the first frame from the camera.

Latency measurement methods

In the Linux kernel, several APIs are available for pinpointing an event and measuring time, each offering a different level of precision for specific use cases. These APIs enable tracking time intervals, measuring elapsed time, and managing system events. Below is an overview of the commonly used time-measurement APIs in the Linux kernel:

  • ktime_get_boottime: Provides the current “time since boot” in a ktime_t value, expressed in nanoseconds.
  • jiffies/get_jiffies_64: Returns the current jiffy count, which represents the number of timer ticks since the system booted. Elapsed time must be derived from the kernel tick rate (HZ).

Jiffies don’t advance during the suspend state, while ktime_get_boottime keeps counting across suspend. Additionally, ktime_t offers time measurements in nanoseconds, making it far more precise than jiffies.

  1. Usage of GPIO toggle method for latency measurement

To get a second level of surety, a GPIO toggle-based method is also employed in the measurement. It creates a positive or negative pulse when a GPIO is toggled between two reference events. The pulse width can be measured on an oscilloscope, signifying latency between the two events.

At the wakeup interrupt, the GPIO is driven low; once the camera driver receives the frame in DDR, the GPIO is driven high. The GPIO signal thus forms a negative pulse, and measuring its width on an oscilloscope yields the latency between the wakeup interrupt and the frame-reception interrupt.
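A user-space analogue of the pulse idea is sketched below for orientation only: in the actual setup the toggles happen inside kernel interrupt handlers, and GPIO number 34 is an arbitrary assumption.

```shell
#!/bin/sh
# Illustration of the GPIO-pulse method using the legacy sysfs GPIO
# interface; the GPIO number is a placeholder, not the board's actual pin.

GPIO=34

gpio_path() { echo "/sys/class/gpio/gpio$GPIO/value"; }

gpio_setup() {
    echo "$GPIO" > /sys/class/gpio/export
    echo out > "/sys/class/gpio/gpio$GPIO/direction"
    echo 1 > "$(gpio_path)"          # idle high
}

# In the instrumented kernel:
#   wakeup IRQ handler:         drive the line low  (pulse start)
#   frame-received IRQ handler: drive the line high (pulse end)
# The oscilloscope then reads the negative pulse width as the latency.
```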

  2. Usage of RTC alarm as wakeup source

The RTC in a system keeps ticking on its sleep clock even when the processor enters low-power mode; it continuously maintains time and triggers a wake alarm when a set time is reached. The alarm wakes the system or initiates a scheduled task, and it can be set in seconds from the Unix epoch or relative to the current time.

On Linux, tools like rtcwake and the /sys/class/rtc/rtc0/wakealarm file are used for configuration. The system can wake from power-saving modes like suspend-to-RAM or hibernation for tasks like backups or updates. This feature is useful for automation but may require time zone adjustments as the RTC stores time in UTC.

  • The RTC wake alarm is set by specifying a time in seconds in sysfs or using tools like rtcwake.
  • It works even when the system is in a low-power state like suspend or hibernation.
  • To clear the alarm, write a value of zero to the wake alarm file.
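The steps in the list above translate into a short shell sketch; rtc0 is assumed to be the PMIC RTC as exposed by the kernel.

```shell
#!/bin/sh
# Two ways to arm the RTC wake alarm, per the list above.

# Absolute alarm time: $1 seconds from now, as seconds since the Unix epoch.
wakealarm_value() {
    echo $(( $(date +%s) + $1 ))
}

# sysfs method: clear any stale alarm, then write the new epoch time.
arm_wakealarm() {
    echo 0 > /sys/class/rtc/rtc0/wakealarm
    wakealarm_value "$1" > /sys/class/rtc/rtc0/wakealarm
}

# rtcwake method: arm the alarm and enter suspend-to-RAM in one step.
suspend_with_alarm() {
    rtcwake -m mem -s "$1"
}
```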

A typical trigger-based system receives triggers from external sources, such as an external co-processor or the surrounding environment. In the simulation script, the RTC wakeup alarm acts as this external trigger for the QRB4210 application processor.

Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.

Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.

Editor’s Note: The second part of this article series will further expand into wakeup latency and power consumption of this trigger-based vision system.


The post A design platform for swift vision response system – Part 1 appeared first on EDN.

