AI PROJECT CLASS X/XII


Project 2:

AI-Powered Image Recognition for Road Sign Detection

Problem Statement
With the rise of autonomous vehicles, recognizing road signs accurately and in real time is crucial for safety and navigation. This project aims to develop an AI-based system that detects and classifies road signs in images using machine learning techniques.

Users/Stakeholders

  • Autonomous Vehicle Manufacturers: For enhancing vehicle safety and navigation systems.
  • Traffic Management Agencies: To monitor and manage road safety.

Objectives

  • Develop a system that detects and classifies road signs in real-time.
  • Ensure high accuracy to avoid misclassifications that could lead to dangerous driving.

Features

  • Convolutional Neural Networks (CNN): For detecting and classifying road signs.
  • Real-Time Detection: Capable of processing video feeds from vehicle cameras (see the frame-processing sketch after this list).
  • Customizable Sensitivity: Adjust the detection threshold for different environments (e.g., night vs. day).
  • User-Friendly Interface: Visual display of detected road signs.
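
To illustrate the real-time and sensitivity features, here is a minimal sketch of running a trained classifier on video frames with an adjustable confidence threshold. It assumes a trained Keras model saved under the placeholder name road_sign_cnn.h5 and uses OpenCV for capture; it classifies whole frames rather than localizing signs, and the threshold and input size are assumed values.

# Minimal sketch (assumed file name, input size and threshold), not the project's final code.
import cv2            # OpenCV for video capture (pip install opencv-python)
import numpy as np
import tensorflow as tf

MIN_CONFIDENCE = 0.8   # "sensitivity": raise or lower the threshold per environment (assumed value)
IMG_SIZE = (32, 32)    # must match the size the model was trained on (assumed)

model = tf.keras.models.load_model("road_sign_cnn.h5")  # hypothetical saved model

cap = cv2.VideoCapture(0)   # 0 = default camera; replace with a video file path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV frames are BGR; convert to RGB to match typical training images
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    img = cv2.resize(rgb, IMG_SIZE) / 255.0              # same preprocessing as training
    probs = model.predict(np.expand_dims(img, axis=0), verbose=0)[0]
    class_id, confidence = int(np.argmax(probs)), float(np.max(probs))
    if confidence >= MIN_CONFIDENCE:
        cv2.putText(frame, f"class {class_id} ({confidence:.2f})",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Road sign detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                # press q to quit
        break
cap.release()
cv2.destroyAllWindows()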

AI Used

  • Supervised Learning Models: CNN for image classification.
  • Transfer Learning: To enhance model performance using pre-trained networks such as VGG16 or ResNet (see the sketch after this list).
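
A minimal sketch of how the transfer-learning idea could look with VGG16 as a frozen feature extractor. The 64×64 input size and the small classifier head are assumptions; the 43 classes correspond to GTSRB, and the training code from the Solution section would otherwise stay the same.

# Transfer-learning sketch (assumed input size and classifier head).
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model

NUM_CLASSES = 43   # GTSRB has 43 road sign classes

base = VGG16(weights='imagenet', include_top=False, input_shape=(64, 64, 3))
base.trainable = False   # freeze the pre-trained convolutional layers

x = Flatten()(base.output)
x = Dense(128, activation='relu')(x)
outputs = Dense(NUM_CLASSES, activation='softmax')(x)

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()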

Dataset

  • German Traffic Sign Recognition Benchmark (GTSRB): A publicly available dataset containing thousands of labeled road sign images (a loading sketch follows below).
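
One possible way to load the images is sketched below. It assumes the GTSRB images have been converted and exported into one sub-folder per class (e.g. data/gtsrb/00000/, data/gtsrb/00001/, ...); the path, image size and split are placeholder choices, not part of the official dataset layout.

# Loading sketch, assuming one sub-folder per class under a hypothetical data/gtsrb directory.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/gtsrb",              # hypothetical path to the exported images
    labels="inferred",
    label_mode="categorical",  # one-hot labels, matching categorical_crossentropy
    image_size=(32, 32),       # assumed resize; GTSRB images vary in size
    batch_size=32,
    validation_split=0.2,
    subset="training",
    seed=42,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/gtsrb",
    labels="inferred",
    label_mode="categorical",
    image_size=(32, 32),
    batch_size=32,
    validation_split=0.2,
    subset="validation",
    seed=42,
)
# Pixel values can then be rescaled to [0, 1], as in the Solution code below.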

Solution
The system will be implemented using Python, leveraging CNN models to detect and classify road signs from images or video feeds. The project can also be extended to work with live video from vehicle cameras.

# Import necessary libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Load the dataset (assuming it is preprocessed and available as NumPy arrays).
# Replace 'X.npy' and 'y.npy' with the paths to your dataset.
X = np.load('X.npy')  # Array of images
y = np.load('y.npy')  # Corresponding labels (road sign classes)

# Normalize the image data to the range [0, 1]
X = X / 255.0

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# One-hot encode the labels
label_binarizer = LabelBinarizer()
y_train = label_binarizer.fit_transform(y_train)
y_test = label_binarizer.transform(y_test)

# Data augmentation (optional, improves model performance)
datagen = ImageDataGenerator(
    rotation_range=10,
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1
)
datagen.fit(X_train)

# Build the CNN model
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=X_train.shape[1:]),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(len(label_binarizer.classes_), activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model on augmented batches
model.fit(datagen.flow(X_train, y_train, batch_size=32), validation_data=(X_test, y_test), epochs=10)

# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
print(f"Test accuracy: {test_acc}")

Steps to run this code:

  1. Replace 'X.npy' and 'y.npy' with the actual image data and labels.
  2. Ensure images are preprocessed and labeled (or use a dataset like GTSRB).
  3. Install required libraries: pip install tensorflow scikit-learn numpy.
  4. Run the code to train and evaluate the CNN model on road sign images; a sketch of saving and reusing the trained model follows below.
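
Once the script above has run, the trained model can be saved and reused on new images. A minimal sketch follows, assuming the variables model, X_train and label_binarizer from the code above are still in scope; the file names are placeholders.

# Sketch: save the trained model and classify a single image (placeholder file names).
import numpy as np
import tensorflow as tf

model.save("road_sign_cnn.h5")                         # persist the model trained above
loaded = tf.keras.models.load_model("road_sign_cnn.h5")

img = tf.keras.utils.load_img("example_sign.png",      # hypothetical test image
                              target_size=X_train.shape[1:3])
arr = tf.keras.utils.img_to_array(img) / 255.0          # same preprocessing as training
probs = loaded.predict(np.expand_dims(arr, axis=0), verbose=0)[0]
print("Predicted class:", label_binarizer.classes_[np.argmax(probs)],
      "with confidence", float(np.max(probs)))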
