ONE-STOP SOLUTIONS FOR TRUCK MOVEMENT AND CONTAINER TRACKING
RFID Solutions
Ensures automated, end-to-end logging of container movements across the yard.
1. Yard Readers (Fixed Antennas)
Mounted on high poles or light towers across the yard.
Antennas angled down to cover each block.
Used for slot confirmation of stacked containers.
2. Mobile Readers (on Reach Stackers/RTGs/Drones)
Mounted on equipment for real-time location updates during moves.
Drones with RFID readers can fly over stacks to verify IDs.
3. Middleware / Server
Collects data from all readers.
Filters duplicate reads.
Integrates with Yard Management System (YMS).
Maintains container slot map (row, bay, tier).
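For illustration only, a minimal in-memory sketch of this middleware logic is shown below: it drops duplicate reads seen by the same reader within a short window and keeps the latest (row, bay, tier) slot per container ID. The names (YardMiddleware, TagRead) and the 30-second window are assumptions; a production middleware would persist every accepted event to the YMS database.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

DEDUPE_WINDOW = timedelta(seconds=30)  # assumed duplicate-suppression window

@dataclass
class TagRead:
    container_id: str   # e.g. "CSQU3054383"
    reader_id: str      # e.g. "POLE-07-ANT-3" (hypothetical naming)
    row: str
    bay: int
    tier: int
    seen_at: datetime

class YardMiddleware:
    def __init__(self):
        self._last_seen = {}   # (container_id, reader_id) -> last accepted timestamp
        self._slot_map = {}    # container_id -> (row, bay, tier)

    def ingest(self, read: TagRead) -> bool:
        """Return True if the read was accepted (i.e. not a duplicate)."""
        key = (read.container_id, read.reader_id)
        last = self._last_seen.get(key)
        if last is not None and read.seen_at - last < DEDUPE_WINDOW:
            return False                      # duplicate within the window: drop it
        self._last_seen[key] = read.seen_at
        self._slot_map[read.container_id] = (read.row, read.bay, read.tier)
        return True                           # forward the accepted event to the YMS here

    def locate(self, container_id: str):
        """Slot-map lookup: (row, bay, tier) or None if unknown."""
        return self._slot_map.get(container_id)
```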
4. Software / Dashboard
Real-time yard map – shows exact location & stack.
Container search – find by ID in seconds.
History logs – all moves, gate-ins/outs, dwell time.
Alerts – wrong placement, misread, unauthorized movement.
🔹 Placement Strategy (360m × 40m Yard Example)
Divide yard into blocks (rows × bays).
Install RFID reader poles every ~40–50m with 4–6 directional antennas each (pole-count arithmetic is sketched after this list).
Ensure overlapping coverage to avoid blind spots.
Gate UHF readers with 4 antennas each (2 lanes in + 2 lanes out).
Optional: Equip reach stackers with RFID readers + GPS for event-driven updates.
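The rough arithmetic behind the pole count for this example layout is sketched below; the 45 m spacing and 6 antennas per pole are assumed values within the ranges given above, and the real count depends on block layout and an RF coverage survey.

```python
import math

# Illustrative pole-count estimate for the 360 m x 40 m yard example.
YARD_LENGTH_M = 360
POLE_SPACING_M = 45          # midpoint of the ~40-50 m recommendation
ANTENNAS_PER_POLE = 6        # upper end of the 4-6 antenna range

poles = math.ceil(YARD_LENGTH_M / POLE_SPACING_M) + 1            # poles at both ends
positions = [min(i * POLE_SPACING_M, YARD_LENGTH_M) for i in range(poles)]

print(f"Poles needed: {poles}")                        # 9
print(f"Pole positions (m): {positions}")              # 0, 45, ..., 360
print(f"Total antennas: {poles * ANTENNAS_PER_POLE}")  # 54
```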
🔹 Workflow
Gate Entry – Container tagged & scanned at gate → ID + truck logged.
Yard Placement – Container placed in block/slot. Mobile reader (on equipment) or fixed readers detect tag → update database.
Stack Update – If container moved, new location auto-updated.
Gate Exit – Container scanned again when leaving → exit recorded.
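A minimal sketch of the event log behind this workflow is shown below, assuming hypothetical GateEvent/YardLog structures; dwell time (as listed in the dashboard features above) then falls out of the gate-in and gate-out timestamps.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class GateEvent:
    container_id: str
    event: str               # "GATE_IN", "MOVE", or "GATE_OUT"
    slot: Optional[str]      # e.g. "A-12-2" (row-bay-tier); None at the gate
    truck_id: Optional[str]
    at: datetime

class YardLog:
    """Illustrative history log; a real deployment writes to the YMS/TOS."""

    def __init__(self):
        self.events: List[GateEvent] = []

    def record(self, ev: GateEvent) -> None:
        self.events.append(ev)

    def dwell_time_hours(self, container_id: str) -> Optional[float]:
        """Hours between the first GATE_IN and the last GATE_OUT, if both exist."""
        ins = [e.at for e in self.events
               if e.container_id == container_id and e.event == "GATE_IN"]
        outs = [e.at for e in self.events
                if e.container_id == container_id and e.event == "GATE_OUT"]
        if ins and outs:
            return (max(outs) - min(ins)).total_seconds() / 3600
        return None
```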
🔹 Benefits
✅ Automated location accuracy → no manual searching.
✅ Reduced truck idle time → faster container retrieval.
✅ Lower labor costs for yard inventory.
✅ Audit trail for compliance & billing.
✅ Future integration with OCR cameras, GPS, or IoT sensors.
OCR - Solutions
An AI-driven system automatically recognises the container number on the top and side walls of the container through cameras. The captured images are evaluated by optical character recognition (OCR) software, and the extracted container information is stored in the database, eliminating manual entry of container numbers.
An OCR-based container tracking system can be deployed in the yard to reliably read container IDs at the gates, on the top/side of stacks (drone or mast cameras), and during handheld audits. The following covers:
A short overview of the system and where OCR fits.
A robust, end-to-end pipeline (capture → OCR → validation → integration).
Model & algorithm recommendations, plus preprocessing and augmentation tricks that actually help in the field.
Operational and deployment notes (edge vs cloud, latency, human-in-loop).
A practical pilot checklist and next steps you can action immediately.
1) Quick overview
Goal: automatically extract ISO container IDs (and optional other text like seal numbers) from images/video and map each ID to a yard slot/time in your TOS/WMS. The OCR module converts images → text, then a validation & reconciliation step matches text to container manifest and assigns location.
Data sources:
Gate camera(s) (primary — capture front/rear door shots)
Yard mast or fixed cameras (for aisles / stack faces)
Drones / mast cameras (for top/side of stacks)
Handheld devices for audits (worker with mobile app)
Primary constraints: varied lighting, angled views, rust/fading, dirt, occlusion, motion blur.
2) End-to-end pipeline (recommended architecture)
Capture
Triggered capture at gate (ANPR-style), or periodic capture from mast/drone.
Use a short burst (e.g., 3 frames) to reduce motion blur and allow best-frame selection.
Preprocessing (per image / per frame)
De-noise (bilateral or non-local means).
Contrast enhancement (CLAHE).
Perspective correction (if signboard plane can be estimated).
Adaptive thresholding for low-contrast plates.
Multi-frame stacking (if available), or best-frame selection via a sharpness score (see the sketch after this list).
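A minimal OpenCV sketch of these preprocessing steps, assuming a burst of BGR frames straight from the camera, might look like the following; the parameter values are starting points, not tuned settings.

```python
import cv2
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    # Variance of the Laplacian is a common sharpness proxy for best-frame selection.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def preprocess_burst(frames):
    """Pick the sharpest frame from a burst, then denoise and boost contrast."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    best = grays[int(np.argmax([sharpness(g) for g in grays]))]

    denoised = cv2.fastNlMeansDenoising(best, None, 10)           # non-local means, h=10
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # CLAHE contrast boost
    contrast = clahe.apply(denoised)

    # Adaptive threshold helps on very low-contrast markings (optional branch).
    binary = cv2.adaptiveThreshold(contrast, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)
    return contrast, binary
```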
Detection (text region detection)
Use an object detector to find container ID regions (door area, side panels).
Models: YOLOv8/YOLOv5, PP-YOLO, or CRAFT/DBNet for text boxes.
Output: bounding boxes (and rotated boxes) for candidate text.
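As a sketch only, the detection stage with Ultralytics YOLOv8 could look like the code below; container_text.pt stands for a hypothetical model fine-tuned on your own annotated yard images (ID regions on doors and side panels), not an off-the-shelf checkpoint.

```python
import cv2
from ultralytics import YOLO

model = YOLO("container_text.pt")   # hypothetical custom-trained weights

def detect_id_regions(image_path: str, conf_threshold: float = 0.4):
    """Return cropped candidate text regions from one image."""
    img = cv2.imread(image_path)
    results = model(img, conf=conf_threshold)[0]       # single-image inference
    crops = []
    for box in results.boxes.xyxy.cpu().numpy():       # [x1, y1, x2, y2] per detection
        x1, y1, x2, y2 = box.astype(int)
        crops.append(img[y1:y2, x1:x2])
    return crops
```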
Recognition (text recognition)
Crop detected regions, resize maintaining aspect, feed to recognizer.
Models: CRNN, Convolutional-CTC, or Transformer-based (TrOCR / Vision Transformer OCR).
Use lexicon constraints (only ISO container charset) to improve accuracy.
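A recognition sketch using the public TrOCR printed-text checkpoint is shown below; in practice the model would be fine-tuned on cropped container ID images, and strict whitelist/ISO enforcement happens in the validation step that follows.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Public checkpoint used as a starting point; fine-tune for yard conditions.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

ISO_CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def recognize(crop: Image.Image) -> str:
    """OCR one cropped ID region and keep only the A-Z / 0-9 charset."""
    pixel_values = processor(images=crop.convert("RGB"), return_tensors="pt").pixel_values
    ids = model.generate(pixel_values)
    text = processor.batch_decode(ids, skip_special_tokens=True)[0]
    return "".join(c for c in text.upper() if c in ISO_CHARSET)
```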
Postprocessing & validation
Normalize text: remove whitespace, replace ambiguous chars (0 ↔ O, 1 ↔ I/L).
Regex match for ISO pattern (4 letters + 6 digits + check digit).
Confidence scoring: combine detector + recognizer confidence; require threshold.
Optional check-digit validation (ISO 6346) to discard misreads; a minimal implementation sketch follows below.
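The sketch below combines the ISO pattern regex with ISO 6346 check-digit validation. The letter-value table follows the standard (values run 10–38 with multiples of 11 skipped), and CSQU3054383 is the commonly cited sample ID.

```python
import re

ISO_PATTERN = re.compile(r"^[A-Z]{4}\d{7}$")   # 4 letters + 6 serial digits + check digit

LETTER_VALUES = dict(zip(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
    [10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25,
     26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38],
))

def check_digit(owner_serial: str) -> int:
    """ISO 6346 check digit for the first 10 characters (4 letters + 6 digits)."""
    total = sum(
        (LETTER_VALUES[c] if c.isalpha() else int(c)) * (2 ** i)
        for i, c in enumerate(owner_serial)
    )
    return total % 11 % 10   # a remainder of 10 maps to check digit 0

def is_valid_container_id(text: str) -> bool:
    text = text.strip().upper().replace(" ", "")
    if not ISO_PATTERN.match(text):
        return False
    return check_digit(text[:10]) == int(text[10])

assert is_valid_container_id("CSQU3054383")   # sample ID validates correctly
```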
Reconciliation & integration
Match OCR result to TOS manifest (exact or fuzzy match within confidence); see the sketch after this list.
If unmatched, route to human-in-loop review (dashboard with image + suggested text).
Write final result (container ID, timestamp, geo/slot) to TOS/WMS and yard map.
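A conservative reconciliation sketch using Python's standard-library difflib is shown below; the 0.85 cutoff and the manifest contents are assumptions, and fuzzy matches should still carry a flag so they can be confirmed on the review dashboard.

```python
import difflib
from typing import List, Optional, Tuple

def reconcile(ocr_id: str, manifest_ids: List[str],
              cutoff: float = 0.85) -> Tuple[Optional[str], str]:
    """Exact match first, then a single conservative fuzzy match, else review."""
    if ocr_id in manifest_ids:
        return ocr_id, "exact"
    candidates = difflib.get_close_matches(ocr_id, manifest_ids, n=1, cutoff=cutoff)
    if candidates:
        return candidates[0], "fuzzy"     # still surface for a confidence check
    return None, "review"                 # route to the human-in-loop dashboard

# Usage with a hypothetical manifest:
manifest = ["CSQU3054383", "MSCU5285725", "HLXU1234569"]
print(reconcile("CSQU3054388", manifest))   # ('CSQU3054383', 'fuzzy') - one digit misread
```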
Audit & feedback loop
Store false/low-confidence cases to retrain / fine-tune model.
Periodic model re-training with new yard images (seasonal/weathering changes).
3) Model & implementation recommendations
Detection
Start with a modern lightweight detector: YOLOv8n/YOLOv8s or Detectron2 text head.
If text regions are small / rotated, use DBNet / CRAFT for more robust text boxes.
Recognition
Two practical options:
CRNN + CTC (fast, reliable) — good starting point.
Transformer-based recognizer (TrOCR / Vision Transformer) — higher accuracy on difficult images, more compute.
Use a character whitelist: uppercase A–Z and digits 0–9 (plus allowed separators if any). Enforce ISO-format at decode time.
Architectures for production
Edge inference: a Raspberry Pi + Coral accelerator or a Jetson at the gate cameras is feasible for low latency. Use a quantized YOLO + a small CRNN.
Hybrid: edge prefilter (detect candidate regions) → send crops to central GPU server for recognition / validation. This reduces bandwidth and improves reliability.
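A sketch of the hybrid hand-off, assuming a placeholder HTTP endpoint on the central GPU server (the URL and response shape are not a defined API), could look like this:

```python
import cv2
import requests

OCR_ENDPOINT = "http://ocr-server.local/api/v1/recognize"   # placeholder URL

def send_crop(crop, camera_id: str, timestamp: str):
    """JPEG-encode a detected crop on the edge box and ship it to the server."""
    ok, jpeg = cv2.imencode(".jpg", crop, [cv2.IMWRITE_JPEG_QUALITY, 90])
    if not ok:
        return None
    resp = requests.post(
        OCR_ENDPOINT,
        files={"image": ("crop.jpg", jpeg.tobytes(), "image/jpeg")},
        data={"camera_id": camera_id, "timestamp": timestamp},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()   # e.g. {"container_id": "...", "confidence": 0.97}
```

Shipping only crops (a few tens of kilobytes each) rather than full frames is what keeps the bandwidth requirement low in this pattern.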
Drone Technology
Drones (UAVs) fly over the yard, scanning stacked containers to capture ID numbers, positions, and status. This enables real-time visibility even in multi-tier stacks where RFID/OCR readers at ground level may miss units.
Key Components
1. Drone Hardware
Industrial drones (quadcopters or VTOL) with:
High-resolution camera (≥20 MP) for OCR of container IDs.
Thermal/IR option (for night ops or detecting reefer failures).
RTK-GPS or onboard SLAM for precise positioning (<10 cm).
Obstacle avoidance (LiDAR/ultrasonic sensors) to avoid cranes/stacks.
Flight time: 20–40 mins per battery (swap or tethered drone for continuous ops).
2. Payload Sensors
Optical camera for OCR of container numbers on doors & sides.
RFID/Beacon reader (optional) to detect UHF passive/active tags.
3D LiDAR/Photogrammetry to map stack heights and placement.
3. Software & Processing
OCR/AI Model:
Trained on container number fonts, weathered paint, angles.
Works from aerial side/top views.
Yard Mapping Engine:
Converts drone flight data into slot locations (mapped to the yard grid); see the grid-mapping sketch after this section.
Integrates with TOS (Terminal Operating System).
Digital Twin Dashboard:
Real-time yard view: stack heights, occupied slots, empty slots.
Historical playback (container movement timeline).
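As a sketch of the grid-mapping step, assuming GPS fixes have already been projected into local yard coordinates (metres from a yard origin) and using illustrative slot dimensions, the conversion to (row, bay, tier) is simple arithmetic:

```python
import math

BAY_LENGTH_M = 6.5    # ~20 ft slot pitch including gap (assumption)
ROW_WIDTH_M = 3.0     # container width plus clearance (assumption)
TIER_HEIGHT_M = 2.6   # standard container height

def to_slot(x_m: float, y_m: float, z_m: float):
    """Map local yard coordinates to a (row, bay, tier) slot."""
    row = chr(ord("A") + int(y_m // ROW_WIDTH_M))
    bay = int(x_m // BAY_LENGTH_M) + 1
    tier = max(1, math.ceil(z_m / TIER_HEIGHT_M))
    return row, bay, tier

print(to_slot(47.2, 9.5, 5.1))   # ('D', 8, 2)
```

A real mapping engine would use the surveyed block layout rather than uniform spacing, but the principle is the same.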
4. Flight Operations
Pre-planned grid flight paths (autonomous, geofenced to the yard boundary); a waypoint sketch follows this list.
Height profiles: 20–50 m altitude; lower passes (10–15 m) for OCR scans.
Frequency: 1–2 passes/day for inventory; on-demand for audits.
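A lawnmower-pattern waypoint sketch for the 360 m × 40 m example, with assumed pass spacing and altitude, is shown below; an actual mission would be built and geofenced in the drone's flight-planning software, and the code only illustrates the coverage logic.

```python
YARD_LENGTH_M = 360
YARD_WIDTH_M = 40
PASS_SPACING_M = 10      # lateral distance between passes (assumption)
ALTITUDE_M = 12          # low pass suited to OCR scans

def lawnmower_waypoints():
    """Back-and-forth passes across the yard in local (x, y, altitude) coordinates."""
    waypoints = []
    for i, y in enumerate(range(0, YARD_WIDTH_M + 1, PASS_SPACING_M)):
        xs = (0, YARD_LENGTH_M) if i % 2 == 0 else (YARD_LENGTH_M, 0)
        for x in xs:
            waypoints.append((x, y, ALTITUDE_M))
    return waypoints

print(len(lawnmower_waypoints()), "waypoints")   # 5 passes -> 10 waypoints
```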
🔹 Workflow
Flight Planning
Drone follows waypoints to cover yard zones (Import, Export, Empty).
Data Capture
Images & video of stacked containers.
RFID/Beacon pinging if tags installed.
Processing
Onboard AI (edge computing) or upload to server/cloud.
OCR extracts container IDs (ISO 6346 format).
Location assigned via RTK-GPS grid mapping.
Integration
Data sent to Yard Management/TOS.
Dashboard shows container ID, slot, stack level, timestamp.
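For illustration, the record pushed to the Yard Management/TOS could look like the JSON below; the field names and values are placeholders, not a defined API.

```python
import json
from datetime import datetime, timezone

record = {
    "container_id": "CSQU3054383",                 # OCR result, check-digit validated
    "slot": {"row": "D", "bay": 8, "tier": 2},     # from the grid-mapping step
    "source": "drone",
    "confidence": 0.96,
    "captured_at": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(record, indent=2))   # body of the POST to the YMS/TOS API
```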
🔹 Benefits
Covers stacks 3–5 high (where ground readers fail).
Fast inventory: a full yard can be scanned in ~10–15 minutes.
Reduces manpower & errors vs manual survey.
Detects anomalies: misplaced containers, damaged units, reefer alerts.