REF: BUILD_002 // STATUS: DEPLOYED

Assistive Tech @ Saksham

An exercise in sensor fusion and accessible design.

I. Hypothesis

The problem of navigation for the visually impaired is typically treated as a clear-path problem. However, the greater challenge is context awareness. A cane can detect a wall, but it cannot read a sign or detect a silent gesture. The hypothesis was that a multimodal system (Vision + Sonar) could offer a "semantic cane" experience.
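
As a thumbnail of what that fusion layer could mean in code, here is a minimal sketch that merges a sonar range with a vision label into a single spoken-style cue. The Reading structure, thresholds, and phrasing are illustrative assumptions, not the project's actual firmware.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    """One fused sample: sonar supplies proximity, vision supplies semantics."""
    sonar_distance_m: float        # range reported by the ultrasonic stick
    vision_label: Optional[str]    # e.g. "door", "sign: EXIT", or None

def semantic_cue(r: Reading) -> str:
    """Collapse a fused reading into one short cue for speech output."""
    if r.sonar_distance_m < 0.5:
        prefix = "Stop"
    elif r.sonar_distance_m < 2.0:       # matches the stick's 2 m alert range
        prefix = "Obstacle ahead"
    else:
        prefix = "Path clear"
    return f"{prefix}: {r.vision_label}" if r.vision_label else prefix

print(semantic_cue(Reading(1.2, "sign: EXIT")))  # -> "Obstacle ahead: sign: EXIT"
```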

II. Apparatus

Two units made up the rig: an ultrasonic walking stick handling short-range obstacle ranging, and a camera-equipped Smart Cap driven by a Raspberry Pi 4 running object detection, OCR, and sign language translation (the wiring harness and sensor array are shown in Fig 1).

III. Observation

The ultrasonic walking stick proved robust, reliably detecting obstacles within 2 m. The Smart Cap (Vision), however, ran into significant latency on the Pi 4 when running object detection and OCR simultaneously.
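
For reference, the stick's ranging follows the standard ultrasonic echo-timing pattern. The sketch below assumes an HC-SR04-class sensor wired to Raspberry Pi GPIO via RPi.GPIO; the sensor model, pin numbers, and host board are assumptions, since the log does not record the stick's actual electronics.

```python
import time
import RPi.GPIO as GPIO

# Illustrative echo-timing loop for an HC-SR04-class sensor; the sensor model,
# BCM pin numbers, and host board are assumptions, not the stick's real wiring.
TRIG, ECHO = 23, 24
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_m(timeout_s: float = 0.04) -> float:
    """Send a 10 microsecond trigger pulse and time the returning echo."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)

    deadline = time.time() + timeout_s
    start = end = time.time()
    while GPIO.input(ECHO) == 0 and time.time() < deadline:
        start = time.time()
    while GPIO.input(ECHO) == 1 and time.time() < deadline:
        end = time.time()

    return max(0.0, end - start) * SPEED_OF_SOUND / 2.0  # out and back, so halve it

if __name__ == "__main__":
    d = distance_m()
    if d < 2.0:  # the same 2 m alert threshold observed above
        print(f"Obstacle at {d:.2f} m")
    GPIO.cleanup()
```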

We optimized the model using quantization, reducing inference time by 40%. The final system achieved 85% accuracy in real-time sign language translation under controlled lighting.
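
The write-up does not name the inference framework; assuming a TensorFlow model, post-training quantization along the lines below is the usual route to a TFLite model light enough for the Pi 4. The model path, output filename, and calibration tensors are placeholders, not the project's real assets.

```python
import numpy as np
import tensorflow as tf

# Post-training quantization sketch; "detector_saved_model" and the random
# calibration tensors are placeholders for the project's real model and frames.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("detector_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()

with open("detector_int8.tflite", "wb") as f:
    f.write(tflite_model)
```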

[ ARCHIVAL IMAGE: PROTOTYPE V1 ]
Fig 1. Wiring harness and sensor array.

IV. Conclusion

The project secured ₹335,000 in funding, validating the market need. The "semantic" layer is viable, but it needs a dedicated AI accelerator (such as a Coral USB stick) to feel genuinely real-time.