Artificial Intelligence, Natural Language Processing, Robotics, Computer Vision, Programming
Event Type: In person
Participants: 221
Prize Pool: ₹30,000
Est. Projects: 19
Organizers:
Alex Johnson (alex@example.org)
Jamie Rivera (jamie@example.org)
TechnoMania 2.0 brings you a cutting-edge 24-hour offline coding marathon that pushes the boundaries of standard hackathons. Unlike traditional open-ended challenges, this edition focuses on "Integrated Autonomy & Deep Learning."
Participants will move beyond simple computer vision models to build a complete Specialized Autonomous Pipeline. Leveraging Duality AI’s Falcon Digital Twin Platform, teams will utilize high-fidelity synthetic data to solve complex problems in off-road environments.
The Theme: Offroad Semantic Scene Segmentation & Autonomy
The core objective is to develop robust AI models capable of navigating unstructured, difficult-to-access environments (like deserts). Teams will train their models using synthetic data generated by Falcon Cloud and test them against novel environment shifts.
The Workflow: Every team begins with a mandatory Foundation Phase and then splits into a specialized Elective Track to finalize their prototype.
The Challenge Tracks & Problem Statements:
Phase 1: The Foundation (Mandatory for All)
Problem Statement: Offroad Semantic Scene Segmentation
All teams must first build a robust semantic segmentation model capable of distinguishing navigable terrain from obstacles in a desert environment.
Goal: Achieve high accuracy in segmenting terrain types using Duality’s annotated synthetic datasets.
Outcome: A functional vision model that serves as the "eyes" of your autonomous agent.
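To make "high accuracy" concrete, a minimal sketch of mean Intersection-over-Union (mIoU), a standard metric for semantic segmentation. The official evaluation metric is defined in the problem statement document; the class IDs and toy masks below are purely illustrative.

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
    """Average per-class IoU over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 masks; classes (illustrative): 0 = navigable sand, 1 = obstacle
pred  = np.array([[0, 0, 1], [0, 1, 1]])
truth = np.array([[0, 0, 1], [0, 0, 1]])
print(mean_iou(pred, truth, num_classes=2))  # (0.75 + 0.667) / 2 ≈ 0.708
```

The same function works on full-resolution prediction masks, so it can double as a quick sanity check before submitting.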
Phase 2: The Elective Tracks (Choose One)
Once the foundation is set, teams must innovate by integrating one of the following advanced capabilities:
Track 1: Natural Language Processing (NLP)
Focus: Scene Reasoning & Explainability
Problem Statement: Bridge the gap between vision and language. Your system shouldn't just "see" pixels; it must understand them.
Task: Develop a module that generates human-readable text summaries of the scene (e.g., "Obstacle detected 5 meters ahead," "Terrain shifting to soft sand").
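A minimal rule-based sketch of such a module: it converts class fractions from a segmentation mask into a short English summary. The class names, thresholds, and "lower half of frame = near the vehicle" assumption are all illustrative; a real entry might feed these statistics to a language model instead.

```python
import numpy as np

# Illustrative class IDs, not from the official dataset
CLASS_NAMES = {0: "navigable sand", 1: "rock obstacle", 2: "soft sand"}

def summarize_scene(mask: np.ndarray) -> str:
    """Turn a segmentation mask into a human-readable scene summary."""
    total = mask.size
    parts = []
    for cid, name in CLASS_NAMES.items():
        frac = (mask == cid).sum() / total
        if frac > 0.05:                    # mention classes covering >5%
            parts.append(f"{name} ({frac:.0%})")
    # Assumption: the lower half of the frame is closest to the vehicle
    near = mask[mask.shape[0] // 2:]
    warning = " Obstacle ahead." if (near == 1).any() else ""
    return "Scene: " + ", ".join(parts) + "." + warning

mask = np.array([[0, 0, 2], [0, 1, 0]])
print(summarize_scene(mask))
```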
Track 2: Advanced Computer Vision
Focus: Path Planning & Navigation
Problem Statement: Vision is useless without action. The agent needs to know where to go.
Task: Implement algorithms for autonomous path planning and dynamic obstacle avoidance based on the segmentation output. The agent must calculate the safest route through the terrain.
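One simple baseline for this task is graph search over an occupancy grid derived from the segmentation output. The sketch below uses breadth-first search (which returns a shortest 4-connected route on a uniform-cost grid); the grid encoding (0 = navigable, 1 = obstacle) and motion model are assumptions, and a competitive entry would likely use A* or a kinodynamic planner.

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS over a 4-connected grid; returns a list of (row, col) or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                            # goal unreachable

grid = [[0, 1, 0],                         # toy mask: 1 = obstacle column
        [0, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (0, 2)))     # route around the obstacle wall
```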
Track 3: Generative AI
Focus: Domain Robustness & Edge Cases
Problem Statement: Real-world data is messy. Models often fail in "edge cases" (rare scenarios) that are hard to capture.
Task: Use GenAI techniques to synthesize novel "edge case" scenarios (e.g., sandstorms, rare rock formations) to retrain and stress-test the model for superior robustness.
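As a cheap stand-in for full generative synthesis, teams can start with parametric augmentations. The sketch below fakes a "sandstorm" by blending a sandy haze colour and granular noise into a float RGB frame; the haze colour and intensity model are assumptions, and a real entry would use generative models or Falcon's scenario tooling for higher-fidelity edge cases.

```python
import numpy as np

def simulate_sandstorm(image: np.ndarray, intensity: float = 0.5,
                       seed: int = 0) -> np.ndarray:
    """Blend haze and noise into a float RGB image in [0, 1]."""
    rng = np.random.default_rng(seed)
    haze = np.array([0.76, 0.60, 0.42])    # sandy beige, illustrative choice
    noise = rng.normal(0.0, 0.05 * intensity, image.shape)
    stormy = (1 - intensity) * image + intensity * haze + noise
    return np.clip(stormy, 0.0, 1.0)

clean = np.zeros((4, 4, 3))                # toy all-black frame
storm = simulate_sandstorm(clean, intensity=0.8)
```

Running the segmentation model on such perturbed frames, then retraining on the failures, is one straightforward way to demonstrate the "stress-test and retrain" loop this track asks for.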
Track 4: Speech-to-Text / Voice UI
Focus: Human-Machine Interaction
Problem Statement: In field operations, operators need hands-free control over autonomous systems.
Task: Build a real-time Voice User Interface (VUI) that allows operators to issue verbal commands (e.g., "Stop," "Return to base," "Scan sector") which the system interprets and executes.
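The transcription itself would come from an off-the-shelf speech-to-text engine; the part teams must design is the command interpreter. A minimal sketch of that step, mapping recognised text onto the example commands above (the grammar and command tokens are illustrative assumptions):

```python
import re

# Illustrative command grammar; patterns use word boundaries so "stop"
# is not matched inside longer words.
COMMANDS = {
    r"\bstop\b":           "STOP",
    r"\breturn to base\b": "RETURN_TO_BASE",
    r"\bscan sector\b":    "SCAN_SECTOR",
}

def interpret(transcript: str) -> str:
    """Map an STT transcript to a command token, or UNKNOWN."""
    text = transcript.lower()
    for pattern, command in COMMANDS.items():
        if re.search(pattern, text):
            return command
    return "UNKNOWN"

print(interpret("Rover, stop!"))           # -> STOP
print(interpret("please return to base"))  # -> RETURN_TO_BASE
```

Keeping the grammar closed (a fixed set of commands) also makes the "executes" half of the task tractable: each token can be bound directly to a controller action.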
Detailed Guidelines & Resources:
For a complete breakdown of the dataset structure, evaluation metrics, and technical resources for the Falcon Platform, please refer to the official problem statement document.