Unit 3: Making Machines See

CBSE XII AI

Computer Vision

MCQs:

  1. The field of study that develops techniques to help computers “see” is ___________.

a) Python

b) Convolution

c) Computer Vision

d) Data Analysis
Answer: c) Computer Vision

  2. The task of taking an input image and assigning a class label that best describes it is ___________.

a) Image classification

b) Image localization

c) Image Identification

d) Image prioritization
Answer: a) Image classification

  3. Identify the incorrect statement(s):

(i) Computer vision involves processing and analyzing digital images and videos to understand their content.

(ii) A digital image is a picture that is stored on a computer in the form of a sequence of numbers that computers can understand.

(iii) RGB colour code is used only for images taken using cameras.

(iv) An image is converted into a set of pixels, and fewer pixels will resemble the original image.

a) ii

b) iii

c) iii & iv

d) ii & iv
Answer: c) iii & iv

  4. The process of capturing a digital image or video using a digital camera, a scanner, or other imaging devices is related to ___________.

a) Image Acquisition

b) Preprocessing

c) Feature Extraction

d) Detection
Answer: a) Image Acquisition

  5. Which algorithm may be used for supervised learning in computer vision?

a) KNN

b) K-means

c) K-fold

d) KEAM
Answer: a) KNN

  6. A computer sees an image as a series of ___________.

a) Colours

b) Pixels

c) Objects

d) All of the above
Answer: b) Pixels

  7. ___________ empowers computer vision systems to extract valuable insights and drive intelligent decision-making in various applications.

a) Low-level processing

b) High insights

c) High-level processing

d) None of the above
Answer: c) High-level processing

  8. In Feature Extraction, which technique identifies abrupt changes in pixel intensity and highlights object boundaries?

a) Edge detection

b) Corner detection

c) Texture Analysis

d) Boundary detection
Answer: a) Edge detection

  9. Choose the incorrect statement related to the preprocessing stage of computer vision:

a) It enhances the quality of acquired image

b) Noise reduction and Image normalization are often employed with images

c) Techniques like histogram equalization can be applied to adjust the distribution of pixel intensities

d) Edge detection and corner detection are performed on images
Answer: d) Edge detection and corner detection are performed on images

  10. 1 byte = __________ bits

a) 10

b) 8

c) 2

d) 1
Answer: b) 8

  11. What does Computer Vision primarily focus on?

a) Understanding digital images

b) Storing digital images

c) Designing digital images

d) Scanning documents
Answer: a) Understanding digital images

  12. Which of the following is NOT an application of computer vision?

a) Object detection

b) Facial recognition

c) Data encryption

d) Image classification
Answer: c) Data encryption

  13. What type of images does computer vision process?

a) Color images only

b) Grayscale images only

c) Digital images, including color and grayscale

d) Analog images
Answer: c) Digital images, including color and grayscale

  14. In image preprocessing, which step reduces the noise or blurriness from an image?

a) Image normalization

b) Noise reduction

c) Resizing

d) Histogram equalization
Answer: b) Noise reduction

  15. What is used to represent colors in digital images?

a) Pixels

b) RGB color model

c) Vectors

d) Algorithms
Answer: b) RGB color model

  16. Which type of machine learning algorithm is commonly used for classification in computer vision?

a) Unsupervised learning

b) Supervised learning

c) Reinforcement learning

d) Semi-supervised learning
Answer: b) Supervised learning

  17. What does edge detection help to identify in an image?

a) Boundaries between different regions

b) Color distribution

c) Textures

d) Shapes
Answer: a) Boundaries between different regions

  18. The technique that groups similar pixels together based on characteristics is known as:

a) Clustering

b) Image segmentation

c) Feature extraction

d) Color analysis
Answer: b) Image segmentation

  19. Which of the following techniques is used for real-time object detection in computer vision?

a) YOLO

b) CNN

c) Edge detection

d) K-means
Answer: a) YOLO

  20. What is the main function of object localization in computer vision?

a) To classify objects

b) To determine the exact position of objects in an image

c) To enhance the color features of an object

d) To create masks for object separation
Answer: b) To determine the exact position of objects in an image

  21. What term refers to the process of improving the quality of an image to make it suitable for analysis?

a) Preprocessing

b) Detection

c) Segmentation

d) Postprocessing
Answer: a) Preprocessing

  22. Which of the following is a challenge of computer vision?

a) Reasoning and analytical issues

b) High processing speeds

c) Privacy and security concerns

d) All of the above
Answer: d) All of the above

  23. Which stage involves analyzing an image to extract important features?

a) Image acquisition

b) Preprocessing

c) Feature extraction

d) High-level processing
Answer: c) Feature extraction

  24. What does the term “semantic segmentation” refer to?

a) Classifying individual objects in an image

b) Classifying pixels into predefined categories without distinguishing between instances

c) Identifying boundaries of objects

d) Creating masks for image regions
Answer: b) Classifying pixels into predefined categories without distinguishing between instances

  25. In which of the following fields is computer vision commonly applied?

a) Medical imaging

b) Automotive (autonomous driving)

c) Surveillance

d) All of the above
Answer: d) All of the above



Questions and Answers

·  What is Computer Vision, and why is it important?

Answer: Computer Vision (CV) is a field of artificial intelligence (AI) that enables machines to interpret and understand visual information from the world, much like human vision. It involves processing and analyzing images and videos to extract meaningful information. CV is important because it allows machines to perform tasks like object recognition, facial identification, medical image analysis, and autonomous navigation, making it essential for applications across healthcare, automotive, security, and entertainment.

·  What are the key stages in the computer vision process?

Answer: The computer vision process generally involves five stages:

  • Image Acquisition: Capturing digital images or videos through various devices like cameras or scanners.
  • Preprocessing: Enhancing image quality by reducing noise, normalizing pixel values, resizing, or adjusting contrast.
  • Feature Extraction: Identifying key features such as edges, textures, or color patterns within the image.
  • Detection/Segmentation: Identifying and isolating objects or regions of interest within the image.
  • High-Level Processing: Analyzing the detected objects or regions to make informed decisions or predictions.
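
The five stages above can be sketched end-to-end in plain Python. This is a minimal illustration of the flow, not a real vision system: the function names and the synthetic 8×8 image are hypothetical stand-ins.

```python
import numpy as np

def acquire_image():
    # Stage 1 stand-in for a camera/scanner: a synthetic 8x8 grayscale image (0-255).
    rng = np.random.default_rng(seed=0)
    return rng.integers(0, 256, size=(8, 8)).astype(np.uint8)

def preprocess(img):
    # Stage 2: normalize pixel values to the range [0, 1].
    return img.astype(np.float64) / 255.0

def extract_features(img):
    # Stage 3: a crude feature - mean absolute horizontal intensity change.
    return float(np.abs(np.diff(img, axis=1)).mean())

def detect(img, threshold=0.5):
    # Stage 4: segment the image into a binary foreground/background mask.
    return img > threshold

def high_level_processing(mask):
    # Stage 5: a simple decision - what fraction of the image is "foreground"?
    return mask.mean()

image = acquire_image()
norm = preprocess(image)
edge_strength = extract_features(norm)
mask = detect(norm)
fraction = high_level_processing(mask)
```

Each stage consumes the previous stage's output, which is the defining shape of the pipeline regardless of how sophisticated the individual steps are.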

·  Explain the difference between image classification and object detection.

Answer: Image classification involves categorizing an entire image into a specific class or category (e.g., identifying an image as containing a “dog”). In contrast, object detection identifies and locates multiple objects within an image by drawing bounding boxes around them and classifying each object (e.g., detecting both a “dog” and a “cat” within an image and marking their locations).

·  What is feature extraction in computer vision, and why is it important?

Answer: Feature extraction is the process of identifying and extracting important visual patterns or attributes from an image, such as edges, textures, or colors. It is important because it reduces the complexity of the image and helps computer vision algorithms focus on the most relevant information for tasks like recognition, classification, and tracking.

·  What are the applications of computer vision in the healthcare industry?

Answer: In healthcare, computer vision is used for medical image analysis, such as detecting tumors or abnormalities in X-rays, MRIs, or CT scans. It aids in diagnosing diseases, tracking the progression of conditions, and providing visual support for surgery. Computer vision is also employed in robotic surgery, where it helps surgeons with precision and real-time feedback.

·  What challenges do computer vision systems face in real-time applications?

Answer: Computer vision systems face several challenges in real-time applications, including:

  • Data Quality: Poor quality images, such as those captured in low-light conditions, can lead to inaccurate results.
  • Interpretability: The decision-making process of deep learning models is often a “black box,” making it difficult to understand how the system reaches a conclusion.
  • Speed: Balancing the need for real-time processing with the accuracy of object recognition or classification is challenging.
  • Privacy Concerns: The use of technologies like facial recognition raises ethical and privacy issues.

·  What are some common preprocessing techniques used in computer vision?

Answer: Common preprocessing techniques include:

  • Noise Reduction: Removing blurriness, graininess, or distortions from images to improve clarity.
  • Image Normalization: Adjusting pixel values to fall within a specific range (e.g., 0 to 1) for consistency across images.
  • Resizing/Cropping: Changing the dimensions of an image to make it uniform for analysis.
  • Histogram Equalization: Adjusting image contrast to highlight important details, especially in low-contrast images.
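
Two of these techniques — normalization and histogram equalization — can be sketched with NumPy alone. The low-contrast test image is a made-up example; real code would typically use a library such as OpenCV.

```python
import numpy as np

def normalize(img):
    # Image normalization: rescale pixel values from 0-255 to [0, 1].
    return img.astype(np.float64) / 255.0

def equalize_histogram(img):
    # Histogram equalization: spread pixel intensities over the full
    # 0-255 range using the cumulative distribution function (CDF).
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]

# A low-contrast image: all pixel values crowded into the 100-131 band.
low_contrast = (np.arange(64).reshape(8, 8) // 2 + 100).astype(np.uint8)
equalized = equalize_histogram(low_contrast)
```

After equalization the crowded 100–131 band is stretched across the full 0–255 range, which is exactly the contrast improvement described above.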

·  What role do convolutional neural networks (CNNs) play in computer vision?

Answer: Convolutional Neural Networks (CNNs) are a type of deep learning algorithm widely used in computer vision. They automatically learn and extract features from images by applying convolutional layers, pooling layers, and fully connected layers. CNNs are particularly effective in tasks like image classification, object detection, and segmentation, as they can identify complex patterns in visual data without manual feature engineering.
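
The core operations of a convolutional layer — convolution followed by pooling — can be illustrated with hand-rolled NumPy. This is a sketch of the arithmetic only, not a trainable framework layer; the kernel values are an arbitrary example.

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" 2D convolution: slide the kernel over the image and sum
    # element-wise products at each position.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    # Max pooling: keep the strongest activation in each size x size block.
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # responds to horizontal change
features = conv2d(image, kernel)
pooled = max_pool(features)
```

In a real CNN the kernel values are learned from data rather than fixed by hand, and many such layers are stacked.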

·  How does image segmentation work in computer vision?

Answer: Image segmentation involves dividing an image into distinct regions based on shared characteristics, such as color, texture, or intensity. The process can be either semantic, where pixels belonging to the same class are grouped together, or instance-based, where individual objects are differentiated, even if they belong to the same class. Segmentation helps identify and isolate specific objects or areas in an image for further analysis.
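
The simplest form of segmentation — grouping pixels by intensity with a threshold — can be shown in a few lines. The synthetic image (a bright square on a dark background) and the threshold value are illustrative choices, not part of any standard algorithm.

```python
import numpy as np

def threshold_segment(img, threshold=128):
    # Intensity thresholding: split pixels into two regions,
    # foreground (1) and background (0).
    return (img >= threshold).astype(np.uint8)

# Synthetic grayscale image: a bright 3x3 square on a dark background.
img = np.full((6, 6), 50, dtype=np.uint8)
img[2:5, 2:5] = 200
mask = threshold_segment(img)
```

Thresholding is a crude stand-in for the clustering- or learning-based methods used in practice, but it shows the core idea: every pixel ends up assigned to a region.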

·  What are the ethical concerns associated with computer vision technology?

Answer: Ethical concerns include privacy issues, particularly with facial recognition technology being used in surveillance systems without consent. There is also the risk of algorithmic bias, where certain groups may be misidentified or discriminated against due to biased training data. Additionally, computer vision can be misused for generating fake images or videos, leading to misinformation or malicious activities.

·  Explain the concept of “high-level processing” in computer vision.

Answer: High-level processing refers to advanced stages in the computer vision pipeline where the detected objects or regions are analyzed to extract meaningful insights or make decisions. This could involve understanding the context of a scene (e.g., recognizing that a person is in a car) or making predictions (e.g., identifying a medical condition from an X-ray). High-level processing enables computer vision systems to perform tasks like object recognition, scene understanding, and autonomous decision-making.

·  What is the difference between semantic segmentation and instance segmentation?

Answer: Semantic segmentation classifies each pixel of an image into a predefined category, but it does not differentiate between multiple instances of the same class (e.g., all “cars” are labeled the same). Instance segmentation, on the other hand, not only classifies each pixel but also distinguishes between different instances of the same class, allowing the model to identify and separate each object individually, even if they belong to the same category.

·  What is the significance of RGB in computer vision?

Answer: RGB (Red, Green, Blue) is the most common color model used in computer vision to represent color images. Each pixel in an image is made up of a combination of red, green, and blue values, with each channel ranging from 0 to 255. The RGB model is used to encode the colors in a digital image, allowing computer vision systems to analyze and process color information effectively.
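
A tiny concrete example of the RGB model: each pixel is a triple of channel values in 0–255. The luminance weights used for the grayscale conversion (0.299, 0.587, 0.114) are one common convention, assumed here; other conversions exist.

```python
import numpy as np

# A 2x2 RGB image: each pixel holds three channel values in 0-255.
img = np.array([
    [[255, 0, 0], [0, 255, 0]],      # red, green
    [[0, 0, 255], [255, 255, 255]],  # blue, white
], dtype=np.uint8)

red_channel = img[:, :, 0]  # extract just the red values

# Grayscale via common luminance weights: 0.299 R + 0.587 G + 0.114 B.
gray = 0.299 * img[:, :, 0] + 0.587 * img[:, :, 1] + 0.114 * img[:, :, 2]
```

Note the shape: a colour image of height H and width W is stored as an H × W × 3 array, while its grayscale version collapses to H × W.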

·  How does object detection work in computer vision?

Answer: Object detection involves identifying and locating objects within an image. The process typically includes two main tasks: classification, where each object is categorized into a class (e.g., “dog” or “cat”), and localization, where bounding boxes are drawn around each detected object. Object detection algorithms, such as YOLO or R-CNN, utilize deep learning techniques to recognize multiple objects in a single image and label them accordingly.

·  What is the role of deep learning in improving computer vision tasks?

Answer: Deep learning, particularly through neural networks like CNNs, has significantly improved computer vision tasks by enabling systems to automatically learn features from raw image data. Unlike traditional computer vision methods that require manual feature extraction, deep learning models can learn complex patterns and representations directly from data, leading to higher accuracy and the ability to handle more challenging tasks, such as facial recognition and autonomous driving.

·  How does the “black box” nature of deep learning models affect computer vision?

Answer: The “black box” nature refers to the difficulty in interpreting how deep learning models, especially CNNs, make decisions. While these models perform exceptionally well in tasks like image recognition, understanding the reasoning behind their predictions can be challenging. This lack of transparency is problematic, especially in applications like healthcare or security, where understanding the rationale for a decision is critical for trust and accountability.

·  What is edge detection, and why is it important in computer vision?

Answer: Edge detection is a technique used in computer vision to identify boundaries within an image, where there is a significant change in pixel intensity. It helps highlight the outlines of objects and regions in an image, making it easier to understand the structure of the visual content. Edge detection is crucial for tasks like object recognition, scene understanding, and image segmentation, as it provides important cues about the shape and position of objects.
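
A minimal edge-detection sketch using the horizontal Sobel kernel, one classic choice for finding intensity changes. The 5×6 test image with a sharp vertical edge is a made-up example.

```python
import numpy as np

# Horizontal Sobel kernel: responds to left-right intensity changes.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve_valid(img, kernel):
    # "Valid" 2D convolution, as in the CNN sketch above.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# An image with a sharp vertical edge: dark left half, bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
edges = np.abs(convolve_valid(img, SOBEL_X))
```

The response is zero inside the flat dark and bright regions and large only where the intensity jumps — exactly the boundary-highlighting behaviour described above.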

·  What are the potential future advancements in computer vision technology?

Answer: Future advancements in computer vision are expected to include improvements in model accuracy, enabling more complex scene understanding and object recognition. Real-time processing capabilities will improve, allowing faster decision-making for applications like autonomous vehicles. Additionally, computer vision will become more integrated with AI, enabling systems to understand not just images but also context, emotions, and actions, enhancing its use in fields like robotics, healthcare, and entertainment.

·  How can computer vision be used in autonomous vehicles?

Answer: In autonomous vehicles, computer vision is used to interpret the vehicle’s surroundings through cameras and sensors. It helps detect and recognize objects such as pedestrians, other vehicles, traffic signs, and road markings. This enables the vehicle to make decisions like stopping at a red light, avoiding obstacles, or navigating complex traffic scenarios, all crucial for safe and efficient autonomous driving.

·  What are the challenges of using computer vision in real-world scenarios?

Answer: Challenges include:

  • Data Quality: Poor-quality images due to lighting, resolution, or angles can affect the accuracy of computer vision systems.
  • Real-Time Processing: Processing large amounts of data quickly enough for real-time applications is difficult, especially in systems that need instant responses.
  • Complex Environments: Handling complex or dynamic scenes, such as crowded areas or changing weather conditions, adds to the complexity.
  • Ethical and Privacy Concerns: The use of technologies like facial recognition raises concerns about surveillance, consent, and data misuse.