SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) PROBLEM


Detection and Segmentation

In image processing, the output of a processing step can be either another image or an attribute extracted from the input image. Segmentation and detection are two major steps in this direction.
Segmentation divides the image into objects or regions of interest for further analysis. Thresholding is the most commonly used segmentation method due to its simplicity and computational speed. The output of a thresholding process is a binary image, where black pixels describe the background and white pixels describe the foreground (objects), or vice versa.
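As an illustration, global thresholding can be sketched in a few lines of NumPy; the threshold value 128 and the toy image below are arbitrary choices for the example, not values from this work:

```python
import numpy as np

def threshold(image, t):
    """Binary thresholding: pixels above t become foreground (255),
    the rest background (0). `image` is a 2-D uint8 array."""
    return np.where(image > t, 255, 0).astype(np.uint8)

# Tiny synthetic image: a 3x3 bright square on a dark background.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 200

mask = threshold(img, 128)
print(int(mask.sum()) // 255)  # number of foreground pixels -> 9
```

In practice the threshold is rarely fixed by hand; methods such as Otsu's algorithm choose it from the image histogram.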

Detection is the process in which different algorithms are used to detect lines, edges, corners, feature points, and point clouds. For more details about the detection algorithms, see Section 2.2 of this chapter.

High Level Processing

This is the step that comes after low-level processing such as feature extraction, detection, and image segmentation. At this step the input data is no longer an image but a set of points or an image region containing a specific object of interest for the final result of the processing.
The remaining part of the processing deals with:
• Verification that the data satisfy model-based and application-specific assumptions.
• Estimation of application-specific parameters such as object pose or object size.
• Image recognition, which classifies a detected object into different categories, as in pattern recognition, face recognition, and object recognition.
• Image registration, the process of comparing and combining two or more different views of the same object, for example when merging different laser scans into the same coordinate system [1], [5].
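As a sketch of the registration idea, the rigid transform that best aligns two point sets with known correspondences can be estimated with the SVD-based Kabsch/Procrustes solution. The 2-D points below are made up for the example; a real scan-merging pipeline would first have to establish the correspondences (e.g. with ICP, as used in SLAM6D):

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation R and translation t mapping src onto dst
    (corresponding 2-D points, one per row) via the SVD-based
    Kabsch/Procrustes solution."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# A 90-degree rotation plus a shift should be recovered exactly.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([3.0, -1.0])

R, t = rigid_register(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```

With noisy correspondences the same closed form minimizes the sum of squared residuals, which is why it appears as the inner step of ICP-style registration.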

Decision Making

This is the final step of the processing required by the application. It includes the analysis and decisions that need to be made, for example: match or no-match in recognition applications, the need for further review by humans in medical or security recognition applications, and pass or fail in automatic inspection applications.

Feature Extraction Algorithms

The feature extraction part briefly discussed the main features that can be used to further process images in computer vision. The next step, after defining these features, is how to detect them and which algorithms and methods are available today. Since the field of computer vision is quite young and solutions for many applications in robotics and industry are still in the development phase, a large number of algorithms exist for detecting and describing these main features.
A brief description of the main existing algorithms for detecting features, and a deeper overview of the algorithms most relevant to our later work, is given in the subsections below.

Edge Detection Algorithms

It is known to be difficult to design a general edge detection algorithm that performs well in many contexts and meets all the requirements of the subsequent processing stages. Edge detection requires differentiation and smoothing of the image: smoothing can result in loss of information, while differentiation is an ill-conditioned process. Detecting changes in intensity for the purpose of finding edges can be accomplished using first- or second-order derivatives. The methods used for edge detection can be grouped into two main categories: search-based and zero-crossing-based. Search-based methods compute a first-order derivative, whose magnitude indicates the presence of an edge at a point in the image. Zero-crossing-based methods compute a second-order derivative, which produces two values of opposite sign on either side of an edge, and its zero crossings can be used for locating the centers of thick edges.
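To make the two categories concrete, the following sketch applies a first-order operator (Sobel) and a second-order operator (Laplacian) to a synthetic step edge. The kernels are the standard ones; the image and the naive sliding-window loop are purely illustrative:

```python
import numpy as np

def correlate2d(img, k):
    """Valid-mode 2-D cross-correlation (no kernel flip; the sign
    convention does not affect the magnitudes or zero crossings used here)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

# Step image: dark left half, bright right half -> one vertical edge.
img = np.zeros((7, 7))
img[:, 4:] = 1.0

# Search-based: large first-derivative magnitude marks the edge.
gx = correlate2d(img, SOBEL_X)
gy = correlate2d(img, SOBEL_Y)
mag = np.hypot(gx, gy)

# Zero-crossing-based: the Laplacian changes sign across the edge.
lap = correlate2d(img, LAPLACIAN)
print(mag.max(), lap.min(), lap.max())  # -> 4.0 -1.0 1.0
```

The +1/-1 pair in the Laplacian response either side of the edge is exactly the sign change whose zero crossing localizes the edge center.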

1 INTRODUCTION
1.2 BACKGROUND
1.3 PURPOSE AND RESEARCH QUESTIONS
1.4 DELIMITATIONS
1.5 OUTLINE
2 THEORETICAL BACKGROUND
2.1 COMPUTER VISION
2.2 FEATURE EXTRACTION ALGORITHMS
2.3 REAL-TIME IMAGE PROCESSING
3 SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) PROBLEM
3.1 SLAM FROM A HISTORICAL PERSPECTIVE
3.2 3D TOOLKIT
3.3 ALGORITHMS WITHIN SLAM6D
3.4 THE SHOW COMPONENT OF 3DTK
4 RESEARCH METHODS
4.1 CASE STUDY RESEARCH
4.2 TEST PLAN
5 CASE STUDY AND RESULTS
6 CONCLUSIONS AND FUTURE WORK
7 REFERENCES
