Almost all software development on the main algorithm occurred before the start of this journal.

Goal

Assemble PCB

  • Apply solder paste and place components (about 1 hr)
  • Reflow board in toaster oven

    • Solder paste melted at ~190°C, took out at ~210°C
    • Data-logged thermal profile: Thermal profile
  • cleaned up excess solder bridging on MCU and motor driver

  • Soldered pin headers, motors, and buttons by hand

Assembling circuit boards on Thanksgiving

Testing PCB

  • Supply with constant voltage and current power supply @ 6V

    • Able to program MCU
    • MCU peripherals (internal clock, timers, ADCs) work
    • Motors spin (at low speed)
    • Temperatures on MCU and motor driver are low (<30°C)
  • No issues so far, so increase to 7V

    • Still no problems, seems that everything is good!

Goal

Further testing with motors and sensors

  • Program to use sensors for motor powers and directions
  • Motors appear to respond properly w/ one ~4.2V LiPo battery
  • Erratic behavior / apparent MCU freezes w/ 2 batteries
  • Left motor appears to spin faster than right motor when spinning forward, right spinning faster than left when spinning backward
  • Inspecting with debugger shows core jumps to bad addresses
  • Guessing cause is electrical noise from motors / motor driver

Reducing noise

  • Cut power trace and test using independent power supplies
  • MCU core does not appear to freeze any more
  • Add capacitors across motors and across supply lines to driver - does not appear to do anything
  • Add inductor separating motor power supply from digital - core appears to behave normally now

Motors are not in sync

  • Motors still don't spin at the same speed
  • Examining PWM signal using a logic analyzer reveals short ~20µs pulses in between normal control signal
  • Determine that using HAL libraries for setting PWM frequency on MCU stops PWM generation while writing (bringing the line high for the ~20µs)
  • Set PWM frequency instead by writing directly to the CCR* (capture / compare) registers
  • No more short pulses in PWM control signal, motors are now running at a similar speed

Overheating components

  • Inductor / choke heating up to 100+°C extremely quickly w/ 2 batteries and 20% power
  • Motor driver heating up to 110+°C within 10 seconds w/ 2 batteries and 20% power
  • Run for ~20sec, motor driver reaches thermal shutdown (~175°C internal), case ~120°C

Finding causes

  • Carefully inspected all control signals, seem to have no problems
  • Inspected motor voltages w/ oscilloscope:

  • Baseline voltage rose by ~1.5 - 2V(!) (it should be ~0V)
  • Lots of energy lost as heat (heating motor driver and probably choke on direct opposite side)
  • Tested motor driver alone by replacing motor w/ resistor, see the same problem so issue with motor driver
  • Conclude that the motor driver is bad (probably from the repeated 20µs pulses that it was getting before)
  • Cut out and replaced motor driver with a new chip, temperature only got up to ~35°C
  • Signal to motors was much cleaner and had flat baseline:

Website / Documentation

  • Set up this website
  • Added documentation for past work
  • Created video timelapse of creating circuit board using Kdenlive

VCS

  • Put (most) sources in git - 3D-printed objects, circuit board, website
  • Currently hosted on Bitbucket, planning to push to public GitHub repos after competitions are over

Circuit board redesign

See git commits 363985c - 442425f:

442425f Added thermal vias
120da1a Routed
61fb822 Re-routed power
2c7e542 Changed grounds
4d31d87 Power routing & prep for autoroute
a8609e5 Changed drill hole size and moved decoupling cap
318047b Re-arranged headers
afdc002 Moved footprints around to match physical
835f5a4 Flipped cutoff around
b322cc6 More preliminary routing
dfaf3f0 Changed servo to use VBAT power
fbfa1d2 Initial routing
2e4c0a3 Moved OLED to cut-off board
9ef2799 Moved motor driver RCP circuit to front
5652a16 Preliminary layout
6ba1692 Changed power supply
ccd8f04 Power rewire
363985c Moved hall-effect sensors

Handle:

  • Has bumps to fit MPUs (both 6050 and 9150 breakout boards) Handle

MPU6050 and 9150 shenanigans

  • Used breakout board to test interfacing with MPU
  • Used Arduino to calibrate MPU6050

    • Plans to port calibration to STM32F4
  • Attempts to run sensor fusion on STM32F4

    • So far without success:
    • Appears to have correct raw measurements
    • Seems to be problem in fusion / AHRS calculation
    • Will test using original AHRS code (not refactored version)

Sensor fusion testing with MPU6050

Fixed using Madgwick sensor fusion

  • Converted units properly before passing to the fusion algorithm (degrees to radians)
  • Used actual time / frequency instead of constant ideal frequency

Experimentation with using Invensense MPU libraries

  • MPL sensor fusion is less reliable and more complicated than just using Madgwick
  • MPL self-calibration is less accurate than the custom self-calibration code (it uses less time / fewer samples)
  • Possibly might use MPL for compass fusion for automatic magnetic disturbance rejection

Work on moving code

Created generic PID class

Syncing motors with PID

  • After syncing robot goes mostly straight, slightly to right
  • Can't see if encoders are in sync because printing would mess up timings

Turning with MPU (90°, but should work with any amount)

  • Very accurate turns with the Madgwick sensor fusion algorithm
  • Slight overshoot if stopping when at 90° (probably from motors coasting)
  • Will use PID from ~85° to turn with high accuracy
  • Will need to sync motors to turn in place (currently not turning around center because motors are not synced)

Assembling circuit board v1.1

  • Changed resistor on light sensor to increase dynamic range (see commit bcadda5)
Robot Back
Robot Top
  • Encoder waveforms are clean (as before) Encoder waveforms
  • Motor waveforms are much cleaner than before: Motor + waveforms Motor - waveforms
  • Servo works Servo waveforms
  • Sync was behaving erratically, sometimes doing random "turns" for no reason
  • Replacing encoder disk solves problems
  • Might need protection for encoder disk so that it doesn't get disturbed again

Designed mounts for all sensors

  • Made of multiple components that can be printed with minimal height to reduce print times
  • Components are then glued together to make mount and frame

Issues

  • Short-range and long-range mounts are clustered too closely together:

  • Long-range mount needs to move backwards ~1cm

  • Temperature sensor is obscured by short range IR next to it (mount needs to move backwards)

  • Short range spacing block too thick
  • Short range connector slot too low
Long Range IR Mount
Short Range IR Mount
Temperature Sensor Mount
Frame (mount holder)
Frame legs
Full sensor mount

Assembled Alex's circuit board (see Trello card)

  • One of the LEDs was placed backwards
  • Less solder paste on board made cleanup much easier

High-level sensor APIs (see Trello card)

  • Created class abstraction for robot
  • Has methods for all peripherals (motors, sensors, etc.; ping will be implemented once it is known that it will be used):

  • ADC sensors (IRs and floor light sensor)

  • MPU
  • GPIO (buttons and LEDs)
  • PWM input (temperature sensors)
  • Servo control

  • All have been tested except for servo (probably working) and PWM in (probably not working)

Installed Linux on Ethan's laptop (see Trello card)

Calibrated sensors (see Trello card)

  • Created short-ranged calibration curve (actually line), have yet to test
  • Long ranged sensor readings don't seem to line up with what they should be (as per the specs)

Some (probably bad) readings + curves:

Short-ranged IR graph Long-ranged IR graph

Moving distances (currently via encoders only - see Trello card)

  • Currently using synced encoder counts because still waiting on finished sensor calibration to use IR sensor distances
  • Had issues with other testing code running that was also resetting the encoder counts

Porting algorithm to robot

Hard faults with sp (stack pointer) pointing to bad addresses on stack

  • Changed from using vector to array (fixed size)

Exception handling (try/catch) being ignored

  • Re-implemented code to not use caught exceptions

Algorithm is successfully ported and can navigate maze

  • Exception is thrown in some cases (not sure why yet)
  • Turn does not work properly when crossing between -180° and +180°
  • Turn syncing is probably broken

Robot code migrated to git

Work is no longer done on the Dropbox copy - Dropbox is now just a git remote for the sources

Software

Fixed issue with turning across -180° and 180°

  • Normalized all angles to be between -180° and 180° (e.g. 350° would become -10°)
  • Turns seem to be not as accurate (one side turns a little too much), but not definitive

Implemented and tested IR temperature conversion

  • Now doesn't use timer interrupts

Tested servo code

Hardware

Rearranged sensor mounts

see commits d6b9334:
Sensor mounts

Thickened dropper wall and added guides for mounting IR sensors

see ff2652d:
Dropper

Hardware

Moved servo position in dropper (see commit f377079)

  • Servo was originally too close to the back of the dropper - the "kicker" was too close to the back
  • Kicker can move freely after moving servo out by 0.3mm

Noise on long-ranged sensors

  • Signal instability on long-ranged sensors caused by electromagnetic interference from the electronics
  • Shielding sensor with grounded aluminum foil seems to eliminate noise (at least on an Arduino Due)

Software

IR distance averaging

  • IR data is now gathered on SysTick and collected in a round-robin buffer for averaging and other filtering
  • Long-range IR collects 20 data points and takes the average of the lowest 10
  • Short-range IR collects (for now) 128 data points and averages them all

Changes to moving API

  • Moving is now done asynchronously - updates done in SysTick
  • Can abort move and get current distance moved / degrees turned

Victim detection and kit dropping (still WIP)

  • Can detect victims while moving between cells
  • Implemented ignoring visited cells (have not tested)

  • Seems to be crashing algorithm at dead ends (don't know how)

Finished installing / fixing dev environment on Alex's laptop

Software

Sensor API changes

  • Sensors are classes that do their own data processing
  • Robot class feeds sensor class data
  • Distance conversion implemented using sensor classes to provide fit models
  • See commit adcc2a3

Alignment

  • Wrote preliminary alignment code to align with walls
  • See commit a2d86aa

Improvements

  • Ignore turns < ~3° because those turns might not be accurate
  • Decide if a wall is usable (ex. the "wall" includes part of an obstacle)
  • Implement cell alignment in algorithm

Hardware

Optical interference between long and short range IR

  • Beams from front-facing short ranged IR interferes with measurements from long-ranged IR
  • Raising long-ranged IR by 8 cm does not solve problem
  • Probably causes bump in measured calibration curve
  • For now, not using long-ranged IR sensor
  • If necessary, long-range can face backwards

Sensor calibration

  • Calibrated sensors on Ethan's robot

X axis is raw reading, Y is distance in cm:

Software

Ramp

  • Implemented ramp algorithm (see commit 1960ba6)
  • While traversing the ramp, the robot is not aligned and may collide with walls
  • Need some sort of active alignment while traversing ramp
  • Top / bottom of just-discovered tile is not properly marked as visited

Alignment improvements

  • Wait for fresh data before aligning
  • Correct distance from front wall if there is one in front
  • Align is called before every movement function
  • See commits fa4721c and 9c7abfe

Black tile

  • Implemented black tile and solved merge conflicts
  • Bugfixes all around:

  • Previously was marking the wrong tile as black

  • Ignoring black tiles in pathing didn't always work

Hardware

Battery terminal reinforcement

  • Reinforced with shrink-wrap to prevent terminals from being pulled out of socket

LED Light

  • Bright round white LED created for victim signalling
  • Still needs to be soldered to robot
  • Uses 105Ω current-limiting resistor

Software

Ramp Alignment

  • Uses PID while on ramp to stay centered
  • Ignores PID when sensor readings are not valid
  • See commits 0cd5373 and c0a045f

Black tile rejection

  • Sensor must read black for at least 100ms to count
  • See commit 8cb56bc

Victim

  • Victim detection during turns in dead ends

  • Check "front" wall and "left" corner (not "right")

  • Threshold-based victim detection at the end of a move

  • Only occurs if there is a wall in front

  • Fixes to rejection while moving

  • See commits e6e1d63 and fb4dc86
  • Need to fix when moving away from wall to drop, turn, drop, and move away (don't move back or move before dropping)

Flash

  • Implemented library for writing to flash
  • Implemented saving maze data - not yet working
  • Implemented saving on silver tile (not yet committed)
  • See commits 05a56dd, ec3eca6, 543e8f2, and b014ab6
  • Still need to implement saving on button press but dropping current cell

Hardware

Motor problems

  • Motors were behaving erratically with low power output in turns
  • Eventually one motor did not turn while other spun ~90%
  • Caused by bad solder joint on MCU pins for encoder lines

Light sensors

  • Current light sensor (OPB733TR) used for detecting black/white/silver uses IR light, so some "black" surfaces (like some construction paper) that reflect IR light appear as white
  • Decided to replace with visible-light-based detection:
    • Use lensed phototransistor and LED to reduce interference
    • Pick VEMT2520X01 phototransistor w/ 30° viewing angle because it has ~60% sensitivity to red (its peak is in the infrared, ~800nm)
    • Use bright red (~640nm) 20° angle LED - APD3224SURCK-F01

Motor issues

  • At Maker Faire, noticed that robot was struggling to turn - not enough power from motors - same problem as at RCj competition
  • Replaced main power cable running from switch into board with wire of thicker gauge
  • This appears to solve problem - much more motor power than before

Create ping sensor mount

  • Attempt to take measurements from sensor - probably inaccurate

Ping sensor mounts

  • Printed mount - screw holes are out of alignment and pin slot is too small
  • Re-measure everything - pins are 0.1" pitch not 2mm! - find that transducers are probably also out of alignment

Hardware

Chassis

  • printed with PLA (on Robo3D)
    • board cavity slightly smaller than it should be - probably shrunk

Software

  • merged calibration

Hardware

Floor light sensor

  • replaced floor LED and resistors (R9: 24, R11: 470)
    • higher saturation of phototransistor for minimizing interference

Sensor mounts

  • integrated ping sensor mount into main sensor mount frame

Hardware

Temperature sensor mounts

  • Moved holes to better fit lens

Hardware

Cover & OLEDS

  • Received new OLED breakouts with reset pin - 1 more pin than before
  • Updated slot in cover to match size

Ping mount

  • Mount made symmetrical vertically
  • Side that is not stuck into sensor mount frame shortened
  • Sensor can now be flipped without needing mount to be flipped

Software

Algorithm refactoring

  • Algorithm data structure refactored to use symbolic access (read: no magic numbers and easier to read & modify)
  • Saving using serialization methods of data structure to dump into vector for writing into cold storage (read: make platform-agnostic)
  • Ongoing attempt to make code portable between robot and simulation (read: copy+paste-able)
  • Walls can now be marked as "Unknown"
  • Cells with all walls known to be nonexistent are marked as visited

Other simulation-related fixes

  • Rectangular mazes now supported correctly
  • Reading data now always uses istream.get() - fixes odd read bugs

Hardware

Assembled 3rd circuit board

Tested:

  • Power supply
  • MCU
  • Debug interface
  • Floor light sensor
  • IR sensor analog inputs
  • OLED SPI interface

Needs testing:

  • Temperature sensor PWM inputs
  • Drivetrain
  • Servo PWM out
  • Serial port

Software

Refactoring save / resume

  • Refactored to dump vectors of data to a hardware-defined function to save / resume - no direct dependency on flash libs
  • Tested ramps, black tiles, silver tiles (saving & resuming) - fixed bugs

Hardware

Motors

  • assembled motors
  • tested motors and drivers: OK

Motor waveforms

Encoders

  • tested - found two Hall effect sensors with bad soldering (fixed)

Encoder waveforms

Servo

works ¯\_(ツ)_/¯

Software

  • Added some status updates on OLED during init
  • Created Print-compatible U(S)ART wrapper
  • Started logging system:
    • currently dumps to OLED w/ no line wrap (output is invisible after the 8th line) - wrapping needs to be implemented
    • needs serial dump
    • needs to actually be used

Software

Logging

  • Implemented logging to multiple outputs (serial & OLED) w/ generic interface

Hardware

Motors

  • Cleaned and lubricated original copper-brushed motors on robot #1 (blue)
    • Motors now run more smoothly

Sensor mounts

  • Assembled new sensor mounts & transferred sensors to new mounts
    • Sensors from #1 (blue) moved to #3 (magenta)
  • Added ping sensor to #2 (white) and #3 (magenta)
    • Ping readings are not always reliable (get readings of ~5.8cm)
      • Solution: add fuzzy shielding to reduce viewing angle of pings

Other / Misc

  • OLED on #2 (white) has reset pin - use cover with expanded slot
    • OLED needs cycle of reset line to start - implemented reset functionality in SSD1306 driver libs
  • Victim indicator LED removed
    • Now uses OLED to indicate victim (display VICTIM & flash display)
  • Wedges added to all robots (for protection and traversing bumps)

Software

  • SSD1306 reset
  • Ping support
  • Use OLED for victim indication

Software

  • Silver detection now looks at past values
    • Is silver if above certain percentage of readings were above threshold
    • Solves issues posed by wrinkles in silver (inconsistent readings)
  • Added silver values to calibration
  • Added option to clear saved maze data
  • Finally merged cell alignment

Software

Moving straight

  • Use MPU (AHRS) to maintain heading while moving straight
    • Heading PID fed as bias to motor sync PID (to get robot to curve)
    • PID may need more tuning (small overshoot) but reacts quickly
    • Is not guaranteed to always face correct heading upon stop
  • Sync PID now uses (log of) ratio between motor speeds instead of difference as error
    • appears very smooth at normal speeds
    • behaves poorly with very low speeds
      • plan to add speed target PID & disable sync PID if speed too low

Software

Speed PID

  • PID targeting constant speed (calculated as motor power × scale factor)
    • maintains speed well - tested by going up and down ramp (same speed as flat ground)
    • I rise time is negligible
    • requires tuning - currently speed is oscillating
    • occasionally runs full speed for no apparent reason - persists between calls but not between resets
      • debug log shows correction increasing rapidly and one encoder value at 0

Robot (related stuff)

PIDs

  • PID controller for motor speed:
    • Target encoder speed set as power level × scalar
    • PID fed error from target encoder speed
    • PID output is change in motor power (i.e. is accumulated)

Development (related stuff)

GitLab!

  • Moved repo onto GitLab
    • Issue tracker now being used instead of Trello

CMake!

  • Configured CMake build to remove dependence on Eclipse for build

Software

Movement

  • Implemented API to pass custom move info objects to move code
    • objects instruct on additional corrections and when to stop
    • goal is to implement ramp using this API and unify all movement
  • Discovered beings-rcjb/robot#12 - PID speed control sporadically goes to and stays at max power
  • Custom movement API (beings-rcjb/robot!6)
  • Re-implemented ramp using movement API

    • PID control hierarchy:

    PID controller flowchart

Software

  • Fixed issue beings-rcjb/robot#12
    • Summary: move power increases indefinitely (full throttle) after robot is on for more than 65.535s due to overflow in millisecond timer - solved by looking at the difference between timestamps
  • Updated calibrations for right side IR sensors for purple robot

    Right front calibration Right back calibration

  • Installed and tested new temperature sensors on white robot
  • Tested victim detection using heat pads - OK

Software

Algorithm refactor into pseudo-singleton object (#17)

  • Algorithm made into object keeping state about what movements (current cell, moving to, etc.) are happening
    • Move functions use this state to make decisions on whether or not to detect for victims
      • Allows for victim detection to work on turns (for victims in corners)

Confirmation for clearing maze data (#4)

Fixed calls to align after dropping victim (#16)

Software

Closed issues:

#21 - Walls being placed behind robot

  • Force remove wall behind robot after move

#22 - Align to center of next tile using 1 wall

Raspberry Pi (visual/CV)

  • Built and installed OpenCV (and python bindings)
    • Installed python into a virtualenv - however, OpenCV installs system-wide in its install task anyway, so probably better not to bother with virtualenvs for prod.

PiZero

  • Installed raspbian-lite, OpenCV (w/ python bindings), and minimal Fluxbox "desktop" for Pi Zero
  • Imaged card & compressed image down to <4GiB
    • process:
      1. ext4 fs shrink
      2. resize partition down to fs size
      3. re-dd image chopping off unused space
    • flashed second card with image - all installed programs/libs work

Wall detection overhaul #11

  • Wrote up cases for wall detection on all sides using all sensors (on issue tracker)
  • Implemented cases for front detection
    • Unit-tested with gtest with code copied to separate test program (a.k.a. less than ideal, but works)

Wall detection (#11)

  • Implemented wall detection on sides
  • Verify sensor readings before turning to check (!20)

Ramp

  • Prevent robot from tipping by adjusting power based on pitch (#30 !22)
  • Independent moving distances after end of ramp (!25)

Codestyle (#28 !19)

  • Uncrustify config created and enforced for code style
  • config enforced in git pre-commit hook

GDB load script

  • Automate building and loading code over debugger (OpenOCD + GDB) with script

Merged:

  • Wall detection improvements (!18)
  • Cleanup / Codestyle (!19)
  • Replace incorrectly-refactored ramp value rejection threshold (!21)
  • Verify sensor readings before turning to check (!20)
  • Custom movement integrations (!6)
  • De-magic-ify (!11)
  • Flush display contents after init start (!23)
  • Disable checking on turn when not necessary (!24)
  • Independent moving distances after end of ramp (!25)

Tested but not merged (yet):

  • Silver detection using past values (!2)

Software

Clean values (#25 !27)

  • Make sure sensor values are cleared of the effects of movement on readings (ex. flush IR round-robin buffers, update ping readings)

Silver checking using past values (!2)

  • Shortened checking interval to not include silver from previous tile while moving
  • Merged

Obstacle avoidance (#33 !28)

  • If obstacle is detected, turn away until not visible before moving forward

Hardware

Bumper

  • Redesigned front with a larger sloping edge to allow robot to slide along obstacles New bumper

Pi / Optical Victim

  • Testing of algorithm
    • Processing at ~3-4 fps
    • Most of time (0.1s measured per frame) spent running floodfill
    • Get ~10 fps dumping camera frames to screen (and overlaying FPS number)

Obstacle Avoidance (#33 !28)

  • Make sure when avoiding obstacle, robot travels correct distance and does not hit wall
  • Merged

Bumper improvements

  • Moved edge of front bumper fillet forward (c2737bce)
  • Increased rounding on back of bumper (0e7f36c4)

Bumper top

Hardware - Camera mount (9bf5365)

  • Created camera and mirror mount to attach Pi camera to robot
    • Mirror mount attached with screws to have adjustable height
    • Mirrors angled at 30° from horizontal
  • "Bridge" structure to attach camera mount to tops of front side facing IR sensors

Camera mount 1 Camera mount 2 Camera mount 3

Software - Pi

Shutdown on a switch

  • Wrote python script (and systemd units) to shut down Pi when a button attached to GPIO is held

Software

Fixed tick types (#41 !31)

Double dropping fixed (#42 !32)

  • Fixed incorrect setting of flags
  • Added flag to tell if source cell of move straight is new or not
  • Fixed initialization of flags to operate correctly on initial turn

Hardware

Slope of front wedge decreased (#4)

Pi / STM interfacing (visual victims) (817ebd3-32fb489)

  • Improved ADC detection code
  • Added display for received data
  • Yet to test

MEventCpp

  • CMake builds configured (hopefully final)
  • Tags implemented and tested
  • Parameterized test fixture written for lookup tree interface

Hardware

New dropper

  • Designed new dropper to dispense to both sides of robot - no need to turn
  • Spring-loaded so rescue kits come out the top - gives the height necessary for ramps without mounting the dropper up high (keeps it very compact) dropper

Software

TOF Sensors (VL53L0X-cpp)

  • Wrapped ST's VL53L0X library with a C++ interface (for public API methods)
  • Fixed init issue where ST's VL53L0X_DEV must be filled with 0xff (undocumented)

Hardware

New mounts

  • Mounts for TOF sensors
  • Redesigned frame for front set of sensors

frame

Software

MEventCpp

  • Draft implementation written - crashing due to heap issues?
  • Decided to drop for simpler approach

Software

Refactoring

  • Restructured calls between algorithm and hardware layer to resemble goals for event loop system
  • Algorithm now recomputes per cell (#46)
  • Implemented checking continuously while turning
  • Implemented handling for visual victims

Hardware

Pi mounting

  • Created Pi mount to attach Pi to handle
  • Moved MPU to be screwed underneath the cover to make space for the Pi
  • Handle middle "bar" moved upwards to make space for Pi

Software

EEPROM

Implement...

  • returning home after certain number of restarts
  • alternating biases every restart
  • counting of dropped kits
  • returning when all kits have been dropped

Software

FreeRTOS

  • Pull FreeRTOS and freertos-addons into project
  • Wrap main() in task and create shim SystemTick() (for HAL) to get project up and running
  • Create task for MPU periodic polling actions

Software

FreeRTOS

  • Moved Timeout implementation to use FreeRTOS tasks
  • Move ToF polling to a task
  • Fix ping inconsistency issue by ensuring the ping pulse is turned off at a high priority

NanoPi setup

Goals

  • Use Docker to deploy code in snapshottable containers
  • Use resin.io / Resin Supervisor to remotely manage/deploy from development machines
  • Utilize a serial link for 2 way frame-based communication between the Pi and the STM32F4

ResinOS/Yocto Adventures

  • Found resin-allwinner - still incomplete/supposedly being worked on
  • Attempts to patch:
    • Could not get resin-allwinner (from fresh clone) to build on my Arch system
    • Copying driver blobs onto SD card did not seem to do anything
    • Possibly missing patch to root device tree overlay for enabling WiFi device but did not get to test

Armbian mainline

  • Armbian distro has patched mainline device trees to include WiFi/BT
  • No patches for USB OTG
    • Attempt to create overlay adding in usbphy, OTG controller, and ID pins
      • Overlay fails to load with an invalid address (?), and causes boot script to also remove all other overlays
    • Patch base overlay from mainline (torvalds/linux):
      • Grab and apply patches from Armbian tree, then OTG changes
      • Build (DTC_FLAGS=-@ ARCH=arm) on laptop and copy into device trees on the SD card
      • Caveat: must recopy/rebuild device tree for every kernel update

Containers

  • Docker installed without incident
  • Resin supervisor:
    • Container will execute; have not yet spent time creating a configuration / start script based on ResinOS/meta-resin for mounts/volumes/etc

Embedded Linux ARM Tooling

dtbotool

  • Simple tool for dynamically loading device tree overlays with configfs
  • Useful for debugging overlays and devices during initial setup (serials)

ncm-gadget

  • Standard modprobe g_ether (CDC ECM) appears not to work with newer usbnet drivers
    • host device refuses to pick up ethernet address correctly and can't communicate
  • gadgetfs to the rescue!
    • Created ncm-gadget script to automate setting up a NCM only gadget following this presentation (as a dumb shell script)
    • Handle setting up and tearing down gadget on boot and shutdown via systemd

FreeRTOS

  • Moved all periodic functions (that were on SystemTick with counters) into tasks (scheduled using Timeout)

Website

  • Moved back into git repository
  • Drew up plans for migration to a better build system - Gatsby + React

Embedded Linux Software Stack - microservices!

Serial bus interface

  • All communication between the main board (F4) and the Linux system (now NanoPi) happens over an asynchronous serial link
  • Multiple "applications" (ex. visual victims) run on both sides, so some form of addressing system is needed
  • Concept of a packet with fixed size necessary to work properly asynchronously
  • Create a framing and addressing system based off of HDLC
    • Use frame structure (asynchronous) as is, use first byte of packet as an address, and have no checksum
  • Create a HTTP API server to interface between "applications" running on Linux side and the serial link - basically a packet "socket" over an HTTP API
    • Use Python + Flask and requests - Python is already a dependency, so this allows for sharing image layers with Docker deploys versus pulling in ex. Node images.
    • Basic HTTP POST endpoint for sending on an address
    • Subscription based receiving API - an application registers its own API endpoint for the server to POST data to when something is received
      • Currently limited to one subscriber per address with explicit unregistering by the application; as a result, if a (receiving) application crashes, its address won't work until everything from the serial interface up is restarted

Docker images for ARM

  • ResinOS provides many different ARM images, all with cross-building conveniently set up (QEMU with execve traps to make subprocesses also run with QEMU, and scripts to swap sh around with a wrapper calling QEMU)

OpenCV

  • Following a standard OpenCV install guide for Raspberry Pi for most of build (CMake config, etc) - for simplicity, compile under ARM system with emulator instead of trying to set up true cross build
  • In an attempt to save disk space, find the "real" library packages that the various *-dev packages depend on and install only those (not the actual header files) in the finished image with a multi-stage build (installing and then removing headers would still leave an intermediate layer containing them)
    • Create list by manually running through Debian package database for Jessie (release that Resin's Python images are based off of) - so far everything used works, but some packages might still be missing
  • Manually enable certain CPU optimizations (VFPv3 and NEON) because they are not supported by QEMU and not autodetected but present on real targets (and offer large performance gains)
    • As a result OpenCV can't be run with QEMU because it complains about lacking NEON support

Docker images for ARM

shutdownd

  • New shutdown daemon implementation using serial HTTP infrastructure

Serial HTTP

  • Use thread pool for sending received bytes off to applications to not block main server thread
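That pattern, offloading delivery so the receive path never blocks on a slow subscriber, can be sketched with the standard-library thread pool. The `post` callable again stands in for the real HTTP delivery; class and parameter names are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

# Deliver received bytes to subscriber endpoints on worker threads so
# the main server thread only enqueues work and returns immediately.
# `post` stands in for the real HTTP delivery (e.g. requests.post).

class AsyncDispatcher:
    def __init__(self, post, workers=4):
        self.post = post
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def deliver(self, url, payload):
        # Returns a Future; the caller (the server's receive path) does
        # not wait for the subscriber's endpoint to respond.
        return self.pool.submit(self.post, url, payload)

    def shutdown(self):
        self.pool.shutdown(wait=True)
```

A bounded worker count also caps how many concurrent outbound requests a misbehaving subscriber can tie up.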

Misc

  • Dockerfiles for all the (existing) things! Created a templated Dockerfile generation system similar to what Resin.io uses to produce the build variants for all of their different devices. In hindsight, Docker's built-in build args would have been better, but this approach also works with older versions of Docker

Docker images for ARM

Visual Victims

  • Dockerify it! Now that images are made, let's integrate it into the rest of the stack
    • OutpSerialHttp and related options (commit)

Deployment time - environment setup scripts

  • Create repository of all Pi config files
  • Container images transferred over USB network connection with laptop and a local registry running on laptop
  • systemd scripts to run all containers
    • The container is started (then detached) with ExecStartPre and attached to with ExecStart - this way systemd knows when the container is "up" (with Type=simple, the service counts as up as soon as ExecStart is called, which happens only after all ExecStartPre scripts have finished) and knows when it dies or exits (docker attach forwards exit codes); as a mostly useless bonus, output is also logged alongside Docker's own logs
  • For some reason Docker on ARM doesn't respect --network=host, so shutdownd can't run in a container: an isolated container can't actually shut down the system (which is intended behavior, but the usual method of circumventing that restriction is broken)
  • Solution: run shutdownd locally as a normal user - install script creates daemon user with home directory (containing sources and virtualenv), sets up PipEnv dependencies and PolKit rules for allowing shutdown, creates systemd unit which runs it with PipEnv, and uses git to update sources
  • This requires some fun to get to work properly with the serial HTTP service (running in a Docker container):
    • The IP address of the HTTP bridge is acquired at runtime (i.e. at unit start time) with docker network inspect
    • "Self" address for shutdownd (because it is a receiving endpoint) is also similarly acquired by looking at the gateway of the network the containers run on
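The ExecStartPre/ExecStart container pattern described above can be sketched as a unit fragment like the following; the container name, image, and paths are illustrative, not the actual unit files:

```ini
[Service]
Type=simple
# Remove any stale container, then start detached; when this line
# finishes, the container is actually up.
ExecStartPre=-/usr/bin/docker rm -f serial-http
ExecStartPre=/usr/bin/docker run -d --name serial-http serial-http:latest
# Attach so systemd tracks the container's lifetime and exit code
# (docker attach forwards the container's exit status).
ExecStart=/usr/bin/docker attach serial-http
ExecStop=/usr/bin/docker stop serial-http
```

With Type=simple, systemd marks the service active the moment ExecStart runs, which by construction is only after the `docker run -d` has completed.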

OpenCV Images

  • Start work on automating OpenCV builds via GitLab CI so there is a canonical repository / source of truth for other images (like visual victims) to pull from (currently using already-built images on the development machine)

Robot side of serial

  • Created and tested robot side send and receive for serial protocol
  • Sending stuffing implemented with copy and insert into vector
  • Receiving implemented by interrupt per byte and state machine for unstuffing
  • Calibration menu option for sending the shutdown signal (no dedicated function needed, since it just sends a magic string - nothing fancy)
  • Modified camera interface to use receiving
  • NanoPi is small enough to fit in the front of the robot, on top of the current wedge
  • Start redesign to fit webcam on top of Pi sitting in front, with mirror assembly looking down "over" front side-facing sensors
  • Remodeled frame and sensor mounts to fit size constraints:
    • Side-facing sensors must be short enough to not obstruct camera FOV
    • Front-facing sensors must go over the wheels - extend outwards from the frame
  • Forward facing sensors moved behind temperature sensors to have more clearance with wheels
  • Side facing sensors moved forwards to compensate for the larger wheel size (so they measure closer to the front of the robot)
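Returning to the serial protocol above: the send-side stuffing (copy and insert into a buffer) and the per-byte receive state machine can be sketched as below. The actual byte values and framing scheme aren't given in the journal, so this assumes an HDLC-like escape scheme purely for illustration:

```python
# HDLC-style byte stuffing sketch. FRAME/ESC values are assumed, not
# the robot's actual protocol constants.
FRAME = 0x7E  # frame delimiter
ESC = 0x7D    # escape byte; escaped bytes are XORed with 0x20

def stuff(payload: bytes) -> bytes:
    """Build a frame by copying bytes and inserting escapes (the
    'copy and insert into vector' approach, done on a Python list)."""
    out = [FRAME]
    for b in payload:
        if b in (FRAME, ESC):
            out += [ESC, b ^ 0x20]
        else:
            out.append(b)
    out.append(FRAME)
    return bytes(out)

class Unstuffer:
    """Per-byte state machine, as would be fed from a UART RX interrupt."""
    def __init__(self):
        self.buf = []
        self.in_frame = False
        self.escaped = False
        self.frames = []  # completed, unstuffed frames

    def feed(self, b: int):
        if b == FRAME:
            if self.in_frame and self.buf:
                self.frames.append(bytes(self.buf))
            self.buf = []
            self.in_frame = True
            self.escaped = False
        elif not self.in_frame:
            pass  # noise between frames, ignore
        elif b == ESC:
            self.escaped = True
        elif self.escaped:
            self.buf.append(b ^ 0x20)
            self.escaped = False
        else:
            self.buf.append(b)
```

On the MCU the `feed` logic maps naturally onto an interrupt handler that processes one received byte at a time, exactly as described above.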

New sensor mounts

Hardware

Created mounts for:

  • NanoPi:
    • Designed to slot in like a "rack mount"
    • Sandwiches NanoPi board between boards with wings and protrusions to mount in slots
  • Frame front support:
    • Supports frame at front between it and chassis deck - prevents sagging
    • Has slots for placing NanoPi mount
  • Webcam:
    • Tray style mount for webcam
  • Mirror:
    • Mounted to top cover plate via dimple and screw hole

Mirror with NanoPi mount Mirror mount Front support Camera mount top Camera mount bottom NanoPi mount full NanoPi mount halves

Hardware

  • Added slot for webcam data cable in mount
  • Redesigned bottom NanoPi mount plate to have cutouts for routing out of power and data cables (leave enough space for the cables to clear board components and exit out the back)
  • Cut slots in wedge to allow for NanoPi mount to sit flush (screws would intersect with wedge)
  • Cut slots in frame to allow for NanoPi mount screws to pass through when it slides all the way back

Camera mount NanoPi Bottom Wedge Slot in frame

Hardware

  • Shorten lip of webcam mount
  • Add blocks to stabilize support between webcam mount and Pi mount
  • Fix height of webcam mount above Pi mount

Camera mount support

Software - Docker images

  • Split out the NumPy installation into a separate image - it's a one-command build that will save time in OpenCV builds

Hardware

  • Mirror wraps around back of cover to stay on more securely - slots cut out for various screws in the way
  • Fixed slot positions on Pi mount
  • Fixed size of holes in Pi mount top plate

Mirror mount

Hardware

  • New bumper design to cover large omniwheels
    • Use sensor frame as additional support point
    • Screwed down into handle for support
  • New handle with wing to support bumper

Top view of bumper Handle

Embedded Linux

ROS

  • Previous tests with HTTP based system (Flask + requests) show that under high volume (e.g. a letter staying in the camera FOV) the system as a whole uses ~40% CPU
  • ROS at its core is a message middleware layer, leveraging a central master to negotiate P2P TCP sockets in a pub-sub style network
    • Has potential to solve startup configuration issues (with registration) and performance issues together
    • Not supported by Resin (yet), so some images need to be made…
  • Serial bridge can be implemented by publishing on topics (e.g. /serial0/rx/1 - address 1) channel for data received, and subscribing to topics to send
    • ROS apparently supports only 512 objects (including topics and nodes), so we'd have to either reduce the address space or fiddle with ROS master configs - since right now we use a grand total of 2 "applications", we chose the former
    • Testing the new system shows negligible CPU usage by ROS
    • However, we do see a mysterious docker-initc using up ~100% CPU, but per-core loads don't reflect this usage - all are around 25%, likely just the robot code stack
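A small helper capturing the topic scheme above; the /serial0/rx/&lt;addr&gt; layout comes from the text, while the tx naming and the reduced address-space bound are assumptions (nodes would then hang `rospy.Publisher`/`rospy.Subscriber` objects off these names):

```python
# Map serial addresses to ROS topic names for the serial bridge.
# MAX_ADDR is an assumed bound for the reduced address space, chosen
# to stay well under ROS's ~512-object limit; the real value may differ.
MAX_ADDR = 15

def _check(addr: int):
    if not 0 <= addr <= MAX_ADDR:
        raise ValueError(f"address {addr} outside reduced address space")

def rx_topic(addr: int, port: str = "serial0") -> str:
    """Topic the bridge publishes received data on for this address."""
    _check(addr)
    return f"/{port}/rx/{addr}"

def tx_topic(addr: int, port: str = "serial0") -> str:
    """Topic applications publish on to send to this address (assumed
    naming, mirroring the rx side)."""
    _check(addr)
    return f"/{port}/tx/{addr}"
```

Keeping one topic per address per direction is what makes the object count scale with the address space, hence the need to shrink it.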

Now, back to those images…

Images, images, and more images!

  • Docker library has ROS, and ROS for armv7 (arm32v7/ros)
  • However, no QEMU or cross build - let's fix it!
  • No Pip either - let's borrow Resin's Pip install process from their Python images (and do it for amd64)
    • Trying to install and run anything with this shows that dependencies are missing - Docker has already invented buildpack-deps for exactly this, so pull in that build as well
  • Now we need OpenCV - let's replicate the same build process as before (NumPy into OpenCV)
    • Out of laziness, for now installing header packages in both builder and final image instead of finding library versions

Don't forget automation - fiddling with CI

  • Many of these images are built for multiple architectures (amd64 and armv7hf) from (almost) the same Dockerfiles - only the FROM stanza changes
  • So, create a script to build and tag for an appropriate arch and version - it's simple enough to copy+paste around repositories
  • For CI, use a pattern pulled from GitLab's own CI (for GitLab itself): parse the job "arguments" (in this case architectures and versions) from the job name itself, allowing efficient use of YAML mixins to make (almost) one line job definitions
  • Because of our custom Dockerfile tooling, we need to build the images on a "general purpose" CI like GitLab CI: typical Docker build services will only build a specific Dockerfile in the repository (and don't support build args either). So we build on CI, then push to a registry from CI
  • DockerHub doesn't support authenticating with anything other than your own username and password, which is unacceptable for CI. Browsing around for alternative registries turned up quay.io, which has excellent support for creating "robot" authentication credentials - very simple to set up and use
  • Now we have canonical images to reference from Dockerfiles! Huzzah!
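The job-name-as-arguments pattern can be sketched as a .gitlab-ci.yml fragment like this; the script name and job names are illustrative:

```yaml
# Job names encode the build arguments (arch and version); a YAML
# anchor keeps each concrete job definition to one line.
.build: &build
  stage: build
  script:
    # e.g. CI_JOB_NAME = "build:armv7hf:2.7" -> arch=armv7hf, version=2.7
    - ./build-and-push.sh "${CI_JOB_NAME}"

build:amd64:2.7: *build
build:armv7hf:2.7: *build
build:amd64:3.6: *build
build:armv7hf:3.6: *build
```

The hidden `.build` template (leading dot) never runs itself; each real job just aliases it, and the script re-parses `$CI_JOB_NAME` to recover its own arguments.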

Embedded Linux - Vision

  • Implemented setting of V4L2 camera parameters through YAML config file
  • Testing: it turns out the camera's exposure controls (setting the AE mode to Aperture Priority and using exposure_absolute) cannot reach as low an exposure, and hence as high a framerate, as the auto mode
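The journal doesn't show the config file's shape; one plausible layout, using standard V4L2/UVC control names (the structure itself is hypothetical):

```yaml
# Hypothetical config layout; control names are standard V4L2/UVC ones.
camera:
  device: /dev/video0
  controls:
    exposure_auto: 3          # 3 = Aperture Priority mode on UVC cameras
    exposure_absolute: 100    # exposure time, in units of 100 us
    white_balance_temperature_auto: 0
```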

Cleanup

  • Merged open MRs for code that has been sitting on branches while being actively used
    • visual-victims/v4l-configurations
    • robot/new-robot
  • Properly cherry-picked old commits from Nagoya (and elsewhere) onto master:
    • 381f6902 Do not detect bumps as silver
    • 4a4974f1 Display when saving and on black tile
    • ed9aaa7e Never turn at end of move if on ramp
  • Rebase old branches / patches / MRs on new master

Ramp

If the robot is stuck on something (typically a bump while going up the ramp), the ramp feedback loop will keep increasing its correction, because it assumes the robot is always facing the direction the correction targets (i.e. that the underlying heading PID is perfect). In most cases this isn't a problem, but here the correction keeps growing until the robot gets unstuck, which can take significant time; the robot then tends to hit the wall or turn the wrong way.

  • Implemented an API change to feed the current heading error up to the ramp controller
  • Use this error when accounting for alignment in the ramp PID - this cuts out one source of integrated error (the other, centering, is small enough to be insignificant)
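One way this fix can work, sketched below: the ramp corrector subtracts the live heading error before integrating, so it only accumulates error the robot has actually failed to track. The gain, names, and exact structure are assumptions for illustration, not the robot's actual controller:

```python
# Sketch of a ramp correction integrator that accounts for the live
# heading error instead of assuming the heading PID tracks perfectly.
# ki and the structure are illustrative assumptions.

class RampCorrector:
    def __init__(self, ki=0.5):
        self.ki = ki
        self.integral = 0.0

    def update(self, alignment_error, heading_error, dt):
        """alignment_error: measured misalignment on the ramp.
        heading_error: how far the robot currently is from the heading
        the correction is steering toward (fed up through the new API).

        Only the part of the alignment error not already explained by
        un-tracked heading is integrated, so being stuck (a large,
        persistent heading_error) no longer winds up the correction."""
        effective = alignment_error - heading_error
        self.integral += self.ki * effective * dt
        return self.integral
```

In the stuck case the commanded heading change isn't being achieved, so heading_error mirrors the alignment error and the integral stays flat instead of winding up.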

Robot bugs

  • Fixed bug with black tile handling edge case

Python code

  • Refactored the serial code, pulling pyserial-related code into its own packages
  • Packages are included in dependent packages by git submodule and editable install (utilizing a setup.py in Pipenv's virtualenv)
  • Implemented the serial bridge (using the Pi as a serial passthrough) on top of the refactored timeout ReaderThread implementation
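The submodule-plus-editable-install arrangement might look like this in a dependent package's Pipfile; the package name and path are illustrative:

```toml
[packages]
# Shared pyserial package vendored as a git submodule, installed
# editable so changes in the submodule are picked up without
# reinstalling into the virtualenv.
robot-serial = {path = "./vendor/robot-serial", editable = true}
```

Pipenv hands the path to pip as an editable install, which relies on the submodule providing its own setup.py.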