This hackathon is only open to students. Double-check the event page for more information, as this may mean only those from a particular university or country are eligible.
Event Type
Online
1,258
Participants
USD 20,000
Prize Pool
113
Est. Projects
Organizers
Alex Johnson
alex@example.org
Jamie Rivera
jamie@example.org
Introduction:
Safe and well-maintained road infrastructure is the backbone of modern society, ensuring efficient transportation and economic activity. However, the manual inspection of vast road networks for defects like cracks and potholes is slow, costly, and often subjective. This challenge calls upon participants to leverage the power of Artificial Intelligence to automate this critical task.
Your mission is to develop a state-of-the-art computer vision model that can automatically detect and classify various types of road damage from images, paving the way for smarter, safer, and more resilient infrastructure.
Problem Statement & Objective:
The objective of this competition is to build an object detection model that can accurately identify and classify different types of damage on road surfaces.
Given an image of a road, your model must output the location (as a bounding box) and the specific type of each instance of damage present. The goal is to achieve the highest possible accuracy on a hidden test set, demonstrating the model's robustness and precision in a real-world scenario.
Dataset Details:
You will be working with the Road Damage Detection 2022 (RDD2022) dataset, a large-scale, multi-national collection of road images.
Contents: The dataset contains over 47,000 high-resolution images.
Data Split:
Training Set: images with corresponding labels.
Validation Set: images with corresponding labels.
Test Set: images without labels (labels are private for judging).
Damage Classes: Your model must classify damage into one of five categories:
Longitudinal Crack
Transverse Crack
Alligator Crack
Other Corruption
Pothole
Label Format: The provided labels are in the YOLO TXT format. Each .txt file corresponds to an image and contains one line per damage instance, giving the class index followed by the normalized bounding-box coordinates (x_center, y_center, width, height).
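As a starting point, the YOLO TXT labels can be read with a few lines of Python. This is a minimal sketch; the class-index mapping below is an assumption based on the order of the class list above, so confirm it against the dataset's own documentation.

```python
# Assumed class-index mapping -- verify against the dataset documentation.
CLASS_NAMES = {
    0: "Longitudinal Crack",
    1: "Transverse Crack",
    2: "Alligator Crack",
    3: "Other Corruption",
    4: "Pothole",
}

def parse_yolo_labels(path):
    """Return a list of (class_id, x_center, y_center, width, height) tuples.

    All coordinates are normalized to [0, 1] relative to the image size,
    as is standard for YOLO TXT labels.
    """
    boxes = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 5:
                continue  # skip blank or malformed lines
            cls = int(parts[0])
            x, y, w, h = map(float, parts[1:])
            boxes.append((cls, x, y, w, h))
    return boxes
```

Because the coordinates are normalized, remember to multiply by the image width and height before drawing or cropping.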
Submission Format:
For your final submission, you must run your trained model on the provided test images and generate a prediction file for each image.
Prediction Files: For each image in the test set (e.g., test_image_001.jpg), you must create a corresponding text file (test_image_001.txt).
Content: Each line in your prediction file must represent a single detected object and contain six space-separated values:
Packaging: All your prediction .txt files must be placed in a single folder named predictions and compressed into a .zip file (e.g., submission.zip).
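The per-image prediction files and the final zip can be produced with a short helper. Note that the brief does not spell out the six values, so the order used here (class_id, confidence, then the normalized YOLO box) is an assumption; confirm the exact layout with the organizers before submitting.

```python
import os
import zipfile

def write_submission(pred_dir, detections):
    """Write one .txt file per test image and package them as submission.zip.

    `detections` maps an image stem (e.g. "test_image_001") to a list of
    detections. Each detection is ASSUMED to be
    (class_id, confidence, x_center, y_center, width, height);
    verify this six-value order against the official submission spec.
    """
    os.makedirs(pred_dir, exist_ok=True)
    for stem, dets in detections.items():
        with open(os.path.join(pred_dir, stem + ".txt"), "w") as f:
            for cls, conf, x, y, w, h in dets:
                f.write(f"{cls} {conf:.4f} {x:.6f} {y:.6f} {w:.6f} {h:.6f}\n")
    # All .txt files go inside a top-level "predictions" folder in the zip.
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(pred_dir)):
            zf.write(os.path.join(pred_dir, name), arcname=f"predictions/{name}")
```

An image with no detected damage still needs a file; passing an empty list for that stem produces the expected empty .txt.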
Evaluation Criteria:
Submissions will be ranked by Mean Average Precision (mAP), the standard metric for object detection tasks.
Deliverables:
A complete submission must include the following three components:
Prediction File: The submission.zip file containing your model's predictions on the test set.
Source Code: A link to a GitHub repository or a zipped folder containing all the code used to train your model and generate predictions. The code must be runnable for verification.
Technical Report: A short (2-3 page) PDF report detailing your methodology. This must include:
A brief overview of your model architecture.
Details of your data augmentation and training strategy.
Hyperparameter tuning experiments.
A discussion of the techniques you used to improve upon the baseline.
Rules:
Dataset Usage: Participants must use only the provided dataset for training. No external datasets are allowed.
Pre-trained Models: Pre-trained models are allowed and encouraged.
Team Size: Teams are limited to a maximum of 4 members.
Final Decision: The final ranking will be based on the mAP score calculated on the private test set. The organizers' decision is final.
For further updates and Q&A, please join the WhatsApp group.