FogHacks
Summary: 

Low visibility caused by fog presents a significant safety challenge for drivers. This project presents FogHacks, an AI-powered hazard detection system designed to detect road hazards in foggy environments using image-detection ML models and defogging algorithms. The system integrates a Raspberry Pi 5 with a Pi Camera to capture real-world images, then applies defogging techniques to improve visibility before performing object detection. A customized YOLOv10 model was trained on road-sign and foggy-driving datasets to identify key road hazards such as cars, buses, bicycles, pedestrians, and traffic signs. To improve detection performance under foggy conditions, two defogging techniques, Contrast Limited Adaptive Histogram Equalization (CLAHE) and Dark Channel Prior (DCP), were implemented in the preprocessing pipeline. Testing found that the custom model improved accuracy over the base YOLOv10 model, with some variation in results due to factors such as glare and camera image quality. The final prototype is an end-to-end system that takes in real-world road images and detects potential hazards through fog.

Technical Approach/Methodology: 

The purpose of this project is to develop an AI-powered system capable of detecting road hazards, pedestrians, and vehicles under foggy conditions. The system first captures a picture of its environment and stores that image in the computer's local files. It then defogs the image and analyzes it, identifying objects and people that would be significant for a driver to notice. The detected hazards can then be sent to a self-driving bot or projected for a human driver in order to warn them of upcoming road conditions and dangers. The intended use cases are areas with moderate to high levels of fog, where visibility is low and it can be hard to keep track of everything on the road.
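The capture-store-defog-detect cycle described above can be sketched as a single loop. This is a minimal illustration, not the project's actual code: the function and file names are hypothetical, and the camera, defogger, and detector are passed in as callables standing in for the Pi Camera and the trained YOLOv10 model.

```python
from pathlib import Path

def run_pipeline(capture, defog, detect, out_dir="captures"):
    """One cycle: capture -> store locally -> defog -> detect hazards.

    capture/defog/detect are injected callables; in the real system these
    would be the Pi Camera, a defogging routine, and the YOLOv10 model.
    """
    Path(out_dir).mkdir(exist_ok=True)
    image = capture()                                  # picture of the environment
    (Path(out_dir) / "frame.bin").write_bytes(image)   # store in local files
    clear = defog(image)                               # clear fog before detection
    return detect(clear)                               # hazards to report to the driver
```

The detection result returned here would then be forwarded to the self-driving bot or the driver-facing display.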

The YOLOv10 model was used as a baseline before being modified and further trained to meet our requirements. The base model is effective at identifying objects of interest in dense environments, but struggles to identify road obstacles, road signs, and objects within fog [1]. Significant additional training was therefore required for it to properly recognize road features and hazards in foggy conditions. To account for the dip in performance on foggy images, we integrated two defogging algorithms, Contrast Limited Adaptive Histogram Equalization (CLAHE) and Dark Channel Prior (DCP), into the preprocessing stage of the model in order to clear up foggy images as much as possible before object detection. DCP was more effective at defogging and led to better detection performance, but required more compute than CLAHE. As such, CLAHE was implemented as a fallback to keep the Raspberry Pi from overheating, with DCP reserved for demanding cases such as images with heavy fog.

The basis for our design is the Raspberry Pi 5, which has enough processing power to quickly take in images from the Pi Camera and pass them on to the computer running our model. In addition to handling images, the Pi gives active feedback to the mist makers based on readings from a humidity sensor. This allows us to control how thick or thin the fog is, ensuring consistent fog generation when acquiring training data.
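The humidity-based feedback can be illustrated as a simple hysteresis (bang-bang) control step. This is an assumed sketch: the source does not state the actual control law, target humidity, or deadband, so all values and names here are hypothetical.

```python
def mist_control_step(humidity, target, mist_on, band=2.0):
    """Decide the mist maker state from one humidity reading.

    Hypothetical hysteresis controller: switch on below (target - band),
    off above (target + band), otherwise hold the current state to avoid
    rapid toggling of the mist maker.
    """
    if humidity < target - band:
        return True     # fog too thin: run the mist maker
    if humidity > target + band:
        return False    # fog too thick: stop misting
    return mist_on      # within the deadband: keep current state
```

On the real hardware, this step would run each time the humidity sensor is polled, with the returned state driving the mist maker's GPIO pin.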

Outcomes: 

In the end, the custom YOLOv10 model was trained to detect vehicles, bikes, pedestrians, and signs. We established a complete hardware interface, enabling the Raspberry Pi to manage external peripherals, and deployed our trained AI model so it could receive images from those peripherals and detect objects in each captured image. A physical test environment housing the Raspberry Pi 5 and Pi Camera was built, with a fog generation mechanism and an LCD screen to display test images. Defogging algorithms were implemented and real-world image testing was performed. Surprisingly, object detection in the real-world demo was at times more accurate than in software image testing. However, real-world testing showed significantly higher fluctuations in performance than software image testing, which was much more stable.

Course Department: 
EECS
Academic Year: 
2025-2026
Term(s): 
Fall
Winter
Project Category: 
Internal (faculty, staff, TA)
Sponsor/Mentor Name: 
Fei Xia
Project Poster: