
I specialize in embedded vision, robotics, and AI at the edge, with over 30 years of DSP and FPGA-based embedded design experience. My career began in ASIC design and has evolved through deep learning platforms, reference designs, and accelerated inference on edge devices.
I have had the unique opportunity to benchmark a wide range of AI solutions and have reported my results publicly.
Recently, I have been exploring computer vision and agentic AI approaches to help humans interact with robotics.
When not working, I am a passionate rock climber and woodworker.
Designing and deploying neural network inference on embedded accelerators — Vitis AI on AMD/Xilinx Zynq UltraScale+, Hailo-8, and Qualcomm QCS6490 (Hexagon NPU). End-to-end work spanning custom dataset curation, model exploration, quantization-aware training, and hardware-accelerated deployment. Hands-on with 2D classification, object detection, image segmentation, MediaPipe pose / hand / face pipelines, 3D point-cloud detection, and stereo neural inference.
In-depth experience building image-capture pipelines for AMD programmable logic platforms (Spartan-6, Zynq-7000 SoC, Zynq UltraScale+, Versal AI Edge), including camera calibration and ISP tuning for mono, dual (stereo), and multi-camera systems.
Combining computer vision and AI to make robots respond to humans and their surroundings. Built hand-controlled robot platforms (LeKiwi mobile base and robotic arm) on the Qualcomm QCS6490; explored agentic-AI control in ROS 2, evaluating LLMs for tool-use awareness; created digital twins for closed-loop simulation and validation.
I am also known as AlbertaBeef.