Teqie Trolley: AI Robotics for IoT & Embedded Systems


The future of robotics is increasingly tied to its ability to learn, adapt, and interact intelligently with the real world. The Teqie Trolley project is a foundational initiative to build a cutting-edge, AI-driven robotics platform for experimentation, development, and deployment in environments that demand precise perception, control, and autonomy.

Whether you're managing a product roadmap for smart devices, leading an IoT team, or engineering embedded systems, this initiative offers a real-world glimpse into a scalable, sensor-rich system architecture that bridges software intelligence with robust hardware.

Vision: A Learning, Perceiving, Acting Machine

The goal behind Teqie Trolley is to create a platform that goes beyond simple automation. It is designed to operate with contextual awareness and decision-making capabilities, powered by AI and sensor-rich architecture.

Example Tasks the Platform Can Perform:

  • Line and edge tracking using infrared and vision-based systems;
  • Obstacle detection and dynamic rerouting;
  • Indoor localization and navigation via LiDAR and IMU fusion (a minimal fusion sketch follows this list);
  • Object recognition or person-following based on camera input and AI inference.
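
To make the fusion idea concrete, here is a minimal complementary-filter sketch for the IMU half of the problem: blending integrated gyro rates with a gravity-derived angle. It is illustrative only, under a simplified 2D tilt model; the function and parameter names are not from the project.

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyro and accelerometer readings into a pitch estimate.

    gyro_rate        : angular rate around the pitch axis, in rad/s
    accel_x, accel_z : accelerometer readings (only their ratio is used)
    dt               : time step in seconds
    alpha            : blend factor; higher trusts the gyro more
    """
    # Integrate the gyro for short-term accuracy.
    gyro_pitch = pitch_prev + gyro_rate * dt
    # Derive an absolute (but noisy) pitch from the gravity direction.
    accel_pitch = math.atan2(accel_x, accel_z)
    # Blend: the gyro carries high-frequency motion, the accel corrects drift.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Example: one 10 ms update with a 0.1 rad/s pitch rate while nearly level.
pitch = complementary_filter(0.0, 0.1, 0.02, 0.99, dt=0.01)
```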

Data Ingestion and Processing Capabilities:

  • 2D and 3D visual streams from HD and depth cameras;
  • Time-of-flight measurements for depth awareness;
  • Real-time ultrasonic and infrared proximity data;
  • Orientation and acceleration data from a 9-axis IMU;
  • LiDAR point clouds for spatial mapping.

These data sources are processed on-board, at the edge, enabling real-time decision-making without reliance on cloud computation.
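
As a rough illustration of what that on-board pipeline can look like, the sketch below pairs a producer thread polling an ultrasonic sensor with a consumer making stop decisions, with no cloud round-trip in the loop. The driver calls (`measure_echo_microseconds`, `halt_motors`) are hypothetical stubs, not the platform's actual API.

```python
import queue
import threading
import time

readings = queue.Queue(maxsize=100)

def measure_echo_microseconds():
    # Hypothetical stub standing in for a real HC-SR04 echo measurement.
    return 1500.0

def halt_motors():
    # Hypothetical stub for a safety hook into the motor-control layer.
    print("obstacle: halting")

def ultrasonic_reader(poll_hz=20):
    """Producer: poll the ultrasonic sensor and enqueue distances."""
    while True:
        echo_us = measure_echo_microseconds()
        # HC-SR04-style conversion: sound travels ~34300 cm/s, out and back.
        distance_cm = (echo_us / 1e6) * 34300 / 2
        readings.put(("ultrasonic", time.monotonic(), distance_cm))
        time.sleep(1 / poll_hz)

def decision_loop(stop_threshold_cm=25.0):
    """Consumer: react to readings as they arrive."""
    while True:
        source, ts, value = readings.get()
        if source == "ultrasonic" and value < stop_threshold_cm:
            halt_motors()

threading.Thread(target=ultrasonic_reader, daemon=True).start()
decision_loop()
```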

Hardware Architecture: Sensor-Rich, Modular, and Scalable

The Teqie Trolley integrates a wide variety of components through a layered architecture, separating motion control, perception, and system coordination:

[Figure: Teqie Trolley system architecture]

Motor Control & Locomotion

Locomotion is handled by an embedded control platform built around the STM32F303ZE microcontroller, hosted on an ST NUCLEO-F303ZE board. This Cortex-M4 MCU provides deterministic control over motors and real-time feedback processing.

Key components include:

  • L298N Motor Driver: Dual H-bridge controller driving two high-torque MG996R servo motors for differential steering;
  • ICM-20948 IMU (9-axis): Enables orientation tracking and inertial dead reckoning, which is especially valuable for wheel-slip compensation and balance estimation;
  • TCRT5000 Infrared Sensor: Often used for line-following or edge detection, useful for structured environments like warehouses;
  • HC-SR04 Ultrasonic Sensor: Provides distance measurements for basic obstacle detection, often used to initiate stops or detours.

This subsystem is optimized for low-latency motor control, leveraging PWM signals, encoder feedback (if added), and real-time safety cutoffs.
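
The firmware itself runs in C on the STM32, but the core mixing logic is easy to show in a few lines. The sketch below is one common way to turn normalized drive commands into per-wheel PWM duties for a dual H-bridge; it illustrates the technique, not the project's actual firmware.

```python
def mix_differential(speed, turn):
    """Mix forward speed and turn rate into left/right wheel commands.

    speed, turn : normalized commands in [-1, 1]
    Returns (left, right) duty factors in [-1, 1]; the sign selects the
    H-bridge direction inputs, the magnitude sets the PWM duty cycle.
    """
    left = speed + turn
    right = speed - turn
    # Clamp so a hard turn at full speed cannot exceed the duty range.
    scale = max(1.0, abs(left), abs(right))
    return left / scale, right / scale

def to_pwm_ticks(duty, period_ticks=1000):
    """Convert a [-1, 1] duty factor into timer compare ticks."""
    return int(abs(duty) * period_ticks)

left, right = mix_differential(speed=0.8, turn=0.4)  # gentle right arc
print(to_pwm_ticks(left), to_pwm_ticks(right))
```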

Computer Vision and Perception

The perception subsystem runs on a SPEAR-MX8 CPU board, a multicore Arm platform suitable for edge inference. It integrates several perception modalities:

  • Logitech Full HD Camera: Captures video feeds used in visual navigation, object recognition, or gesture-based interaction;
  • LD14P 360° LiDAR: Produces real-time point cloud data for mapping, obstacle avoidance, and SLAM (Simultaneous Localization and Mapping);
  • Time-of-Flight (ToF) Camera (planned integration): Offers high-precision depth sensing ideal for dynamic environments and 3D space interaction.

This sensor suite lets the platform combine classical CV techniques with ML-based scene understanding, making it suitable for semi-structured and unstructured environments.
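
Here is a minimal sketch of the front end of such a pipeline, using OpenCV to capture and prepare a camera frame for a detection model. The input size and normalization are typical assumptions rather than the project's actual settings, and model inference itself is omitted.

```python
import cv2
import numpy as np

def capture_and_preprocess(device=0, size=(300, 300)):
    """Grab one frame and prepare it for a detection model."""
    cap = cv2.VideoCapture(device)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    # Typical detector preprocessing: resize, RGB channel order, normalize.
    resized = cv2.resize(frame, size)
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
    tensor = rgb.astype(np.float32) / 255.0
    return np.expand_dims(tensor, axis=0)  # add a batch dimension

batch = capture_and_preprocess()
# `batch` would then be fed to a TensorFlow detection model;
# model loading and inference are beyond this sketch.
```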

Expandable Input-Output Interface

The platform is built with extensibility in mind. Its I/O architecture supports:

  • Additional digital/analog sensors (temperature, gas, touch);
  • Actuator control (grippers, pan-tilt units);
  • Communication interfaces (I²C, SPI, UART, CAN);
  • GPIO pins for custom hardware integration.

This flexibility ensures the Teqie Trolley can evolve with specific domain requirements, from indoor delivery to interactive robotics.
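
As an example of that extensibility, the sketch below reads a hypothetical add-on temperature sensor over I²C using the common `smbus2` library. The device address, register, and scaling factor are illustrative assumptions, not values from a specific datasheet.

```python
from smbus2 import SMBus

SENSOR_ADDR = 0x48    # hypothetical address of an add-on temperature sensor
TEMP_REGISTER = 0x00  # hypothetical register holding the raw reading

def read_temperature_c(bus_id=1):
    """Read a raw word over I2C and convert it to degrees Celsius."""
    with SMBus(bus_id) as bus:
        raw = bus.read_word_data(SENSOR_ADDR, TEMP_REGISTER)
        # The conversion is device-specific; this scaling is illustrative only.
        return raw * 0.0625

print(f"temperature: {read_temperature_c():.1f} °C")
```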

Software Stack: Embedded Linux Meets Machine Learning

The software architecture blends open-source frameworks with custom tooling, enabling rapid development and integration:

  • Operating System: A Yocto-built embedded Linux image, providing reliable multitasking, real-time extensions, and compatibility with modern frameworks;
  • Machine Learning Framework: TensorFlow is used for model deployment, supporting tasks like object detection or gesture classification;
  • Computer Vision: Powered by OpenCV, handling frame capture, filtering, image transformation, and pre-processing;
  • Custom Utilities: Softeq has developed internal tools for:
    • Motor control calibration;
    • Sensor data visualization and configuration;
    • Diagnostics and debugging over serial or remote interfaces.

These tools create a robust development loop, allowing engineers to iterate on control strategies, perception algorithms, and AI behaviors in a closed feedback cycle.
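
The internal tools themselves are not public, but a diagnostics utility of this kind often reduces to a loop like the following, reading newline-delimited JSON telemetry frames over a serial link with `pyserial`. The frame format and field names here are hypothetical, not the project's actual protocol.

```python
import json
import serial  # pyserial

def stream_telemetry(port="/dev/ttyUSB0", baud=115200):
    """Print selected fields from newline-delimited JSON frames sent by the MCU."""
    with serial.Serial(port, baud, timeout=1.0) as link:
        while True:
            line = link.readline()
            if not line:
                continue  # read timeout, keep polling
            frame = json.loads(line)
            # Hypothetical telemetry fields for illustration.
            print(frame.get("motor_rpm"), frame.get("imu_pitch"))

stream_telemetry()
```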

Application Value and Broader Impact

The value of the Teqie Trolley lies in its adaptability and extensibility. As a real-world robotics testbed, it supports prototyping across domains like:

  • Autonomous delivery carts in indoor environments (e.g., hospitals, warehouses);
  • Educational or research platforms for autonomous navigation and AI;
  • In-building mapping systems for digital twin applications.

By focusing on scalable, edge-enabled robotics, this platform bridges the gap between high-level AI and low-level embedded design, enabling exploration from algorithm design down to control loop tuning.
