SASKEN Computer Vision-Based Fire Detection System

Fire detection devices have been around for quite some time and are essential safety equipment.

Devices that automatically detect fire have been available since 1890, when Francis Upton invented the first smoke alarm [1][2]. With further technological advances in the mid-1960s, smoke detectors started being used in buildings all over the world and became an essential safety device [3].

However, devices like smoke detectors have serious limitations that make them ineffective in many important situations. For example, they may work well in small, closed spaces like homes or offices, but are inefficient in large open spaces like warehouses, theaters, verandas, etc. They require a sufficient level of smoke build-up to be triggered. Modern electronic fire detectors usually depend upon heat and pressure sensors, and they suffer from similar limitations, as they also require a certain amount of heat and pressure build-up to set them off. This is impractical for open spaces: forest fires, for instance, cannot be detected with such smoke or electronic detectors. This necessitated an alternative system that can accept a different mode of input and overcome the existing limitations.

This white-paper presents a computer-vision based fire detection system that takes input from video cameras and analyzes the frames for fire pixels. For this analysis, the system uses the color and luminance features of the input images. Due to rapid developments in digital camera technology and advances in computer-vision methods, more and more computer-vision based fire detection systems are being introduced, and these do not suffer from space limitations.

Methodology

As fire is a visual phenomenon, it has many distinctive features such as color and motion. The algorithm extracts these features to classify whether the pixels under consideration belong to fire or not. As fire is not a stationary process, to reduce the computational load, the first step in the algorithm is to detect the region of the image where there is motion. This is done by calculating the difference between the nth frame of the video (i.e., the frame under consideration) and the (n−1)th frame; the resultant pixels form the Region of Interest (ROI) of the image. These ROI pixels are then fed to the Fire Pixel Classifier (FPC), which consists of various rules. If the ROI pixels pass all of these rules, the pixels under consideration are confirmed to belong to fire. Upon detection of fire pixels, the system can raise an alarm. The procedure is summarized in the flow chart shown in Fig. 1.

Fig. 1 Flow chart of the proposed system
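As an illustration, the flow in Fig. 1 can be sketched as a short Python driver loop. This is only a sketch assuming OpenCV for video capture; detect_motion() and classify_fire_pixels() are the helpers outlined in the component sections that follow, and raise_alarm() is a hypothetical placeholder for whatever alarm mechanism a deployment uses.

```python
# Illustrative skeleton of the pipeline in Fig. 1 (assumes OpenCV).
# detect_motion() and classify_fire_pixels() are sketched in the
# component sections below; raise_alarm() is a hypothetical hook.
import cv2

def run_fire_detection(video_source=0):
    cap = cv2.VideoCapture(video_source)
    ok, prev_frame = cap.read()          # frame (n-1)
    while ok:
        ok, frame = cap.read()           # frame n
        if not ok:
            break
        roi_mask = detect_motion(prev_frame, frame)        # Section A
        fire_mask = classify_fire_pixels(frame, roi_mask)  # Section B
        if fire_mask.any():                                # fire pixels found
            raise_alarm()
        prev_frame = frame
    cap.release()
```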

The various components of the system are:

A. Motion detector

The motion detector confirms motion in a region of the image by differencing subsequent frames of the captured video based on their intensity values. Let the nth frame of the video be the frame under consideration. As this frame is a color image consisting of three channels (R, G, and B), motion is detected by differencing the intensity values of the R, G, and B channels of the nth frame with those of the (n−1)th frame. The resultant pixels give the ROI, and further processing is done on these pixels.

Fig. 2 shows two subsequent frames of the captured video and Fig. 3 shows the difference image obtained after applying the differencing operation. The R, G, and B channels of the difference image are merged and converted to gray-scale for better visualization of the result. The block diagram of the motion detector is shown in Fig. 4.

Fig. 2 Sample subsequent frames
Fig. 3 Result obtained after differencing
Fig. 4 Motion detector block diagram
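A minimal sketch of this step, assuming OpenCV, is shown below. The motion threshold MOTION_TH is an assumed tuning value, not a figure from the source.

```python
# Frame-differencing motion detector (assumes OpenCV).
import cv2

MOTION_TH = 20  # assumed per-pixel intensity-difference threshold

def detect_motion(prev_frame, frame):
    """Return a boolean ROI mask of pixels that changed between frames."""
    diff = cv2.absdiff(frame, prev_frame)          # |frame_n - frame_(n-1)| per channel
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)  # merge R, G, B differences into one channel
    return gray > MOTION_TH                        # True where there is noticeable motion
```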

B. Fire Pixel Classifier (FPC)

In the FPC, a rule-based color model approach is followed. As fire pixels have distinctive color and luminance properties, the ROI is processed in the RGB and YCbCr color spaces. To be identified as fire pixels by the FPC, the ROI pixels have to satisfy the rules listed below. If all the rules are satisfied, the pixels are classified by the FPC as fire pixels; otherwise, that set of pixels is dropped and the process repeats for a new ROI. This process can be visualized with the help of the flow chart shown in Fig. 5.

Fig. 5 Flow chart of the Fire Pixel Classifier

The various rules that the ROI pixels have to satisfy are:

Rule I: For fire pixels in the RGB color space, the R channel has higher intensity values than the G channel, and the G channel has higher values than the B channel. So, a pixel at location (x, y) in the difference image is said to be a fire pixel if R(x, y) > G(x, y) > B(x, y).
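A minimal vectorized sketch of Rule I, assuming the frame is an OpenCV-style BGR array:

```python
# Rule I: R(x, y) > G(x, y) > B(x, y), vectorized over the whole frame.
# `frame` is assumed to be an HxWx3 BGR uint8 NumPy array (OpenCV order).
def rule1(frame):
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    return (r > g) & (g > b)
```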

Rule II: Fig. 6 shows the fire-pixel region highlighted in the original image, and Fig. 7 shows the histograms of the R, G, and B channels of the highlighted fire pixels. The logical explanation for this behavior is that an image of fire is composed of yellow color with traces of red in it. Yellow, in turn, is a combination of red and green. So, the histogram of a fire image will have a large number of red components in the high tonal range, green components in the medium to high tonal range, and blue components in the low tonal range.

Fig. 6 Original RGB image on the left and image with the fire-pixel region highlighted on the right
Fig. 7 Histograms of R, G and B channels

Based on the above observations, thresholds Rth for the R channel, Gth for the G channel, and Bth for the B channel can be set such that a pixel at location (x, y) in the difference image is said to be a fire pixel if R(x, y) > Rth, G(x, y) > Gth, and B(x, y) < Bth.
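Rule II maps directly to per-channel threshold checks. The numeric thresholds below are assumed placeholders, since the source does not specify values for Rth, Gth, and Bth; in practice they would be tuned on representative footage.

```python
# Rule II: R > Rth, G > Gth, B < Bth. Threshold values are assumed
# placeholders, not values from the source.
R_TH, G_TH, B_TH = 190, 100, 140

def rule2(frame):
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    return (r > R_TH) & (g > G_TH) & (b < B_TH)
```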

Images in the RGB color space can be converted to the YCbCr color space. For the nth frame of the video, the mean values of the Y, Cb, and Cr channels, denoted Ymean, Cbmean, and Crmean, are calculated.
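With OpenCV, this conversion and the per-channel means can be computed as sketched below; note that OpenCV orders the converted channels as Y, Cr, Cb.

```python
# BGR to YCbCr conversion plus channel means for the current frame.
import cv2

def ycbcr_and_means(frame):
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    return y, cb, cr, y.mean(), cb.mean(), cr.mean()   # Y, Cb, Cr, Ymean, Cbmean, Crmean
```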

Rules III and IV: The intensity of the fire region in the Y and Cr channels is higher than that in the Cb channel. So, a pixel at location (x, y) in the difference image is said to be a fire pixel if Y(x, y) ≥ Cb(x, y) and Cr(x, y) > Cb(x, y).
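Reusing the channels from the sketch above, Rules III and IV become a single vectorized comparison:

```python
# Rules III & IV: Y(x, y) >= Cb(x, y) and Cr(x, y) > Cb(x, y).
def rule3_and_4(y, cb, cr):
    return (y >= cb) & (cr > cb)
```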

Rule V: As the fire region in an image is made up of yellow and red color components, the Cr value of fire-region pixels will be greater than the mean Cr of the whole image, and the Cb value of fire-region pixels will be less than the mean Cb of the whole image. Also, as the fire region in an image is of high intensity, the Y value of fire-region pixels will be greater than the mean Y of the whole image.
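Rule V compares each pixel against the frame-wide means computed earlier; a minimal sketch:

```python
# Rule V: Y > Ymean, Cb < Cbmean, Cr > Crmean (means over the whole frame).
def rule5(y, cb, cr, y_mean, cb_mean, cr_mean):
    return (y > y_mean) & (cb < cb_mean) & (cr > cr_mean)
```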

Rule VI: Fig. 8 shows the RGB image and its YCbCr-converted image, and Fig. 9 shows the histograms of the Cb and Cr channels of the fire region in the YCbCr image.

Fig. 8 Original RGB image on the left and RGB-to-YCbCr converted image on the right
Fig. 9 Histograms of the Cb and Cr channels of the fire region of the YCbCr image

Hence, thresholds Cbth and Crth for the Cb and Cr channels can be set such that a pixel at location (x, y) in the difference image is said to be a fire pixel if Cb(x, y) ≤ Cbth and Cr(x, y) ≥ Crth.
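Rule VI is again a pair of threshold checks, and the FPC can then be sketched as the conjunction of all six rules restricted to the ROI. The Cbth and Crth values below are assumed placeholders, and the helper functions are the ones sketched under the previous rules.

```python
# Rule VI: Cb <= Cbth and Cr >= Crth. Threshold values are assumed placeholders.
CB_TH, CR_TH = 120, 150

def rule6(cb, cr):
    return (cb <= CB_TH) & (cr >= CR_TH)

# FPC sketch: a pixel is labeled fire only if it lies in the motion ROI
# and satisfies every rule (uses the helpers sketched above).
def classify_fire_pixels(frame, roi_mask):
    y, cb, cr, y_mean, cb_mean, cr_mean = ycbcr_and_means(frame)
    fire_mask = (rule1(frame) & rule2(frame)
                 & rule3_and_4(y, cb, cr)
                 & rule5(y, cb, cr, y_mean, cb_mean, cr_mean)
                 & rule6(cb, cr))
    return fire_mask & roi_mask
```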

Performance evaluation

Based on the methodology discussed above, the authors in [4] used a classification error matrix for performance evaluation. For this, they collected two sets of images. One set comprised images that contained fire; this fire set consisted of 200 images, with diversity in fire color and environmental illumination. The other set did not contain fire, but contained fire-colored regions such as the sun, flowers, etc.

The following condition was used to declare a fire region: only if the model detects at least 10 fire pixels in the ROI is the image assumed to contain a fire region and the fire alarm raised.
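This decision rule is straightforward to express on top of the fire mask produced by the FPC sketch above; the alarm action itself is a hypothetical placeholder.

```python
# Declare fire only if at least 10 fire pixels are found in the ROI, as in [4].
MIN_FIRE_PIXELS = 10

def raise_alarm_if_fire(fire_mask):
    if int(fire_mask.sum()) >= MIN_FIRE_PIXELS:
        print("Fire detected: raising alarm")  # placeholder for the actual alarm hook
        return True
    return False
```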

Fig. 10 shows the classification error matrix obtained by the authors in [4]:

Fig. 10 Classification Error Matrix

With the above methodology, the authors in [4] were able to achieve a classification accuracy of 92.5%.

Conclusion

This paper discussed existing algorithms for fire detection through vision sensors based on a color-space approach. The RGB and YCbCr color spaces are used by the authors in [4] and [1] to obtain a classification accuracy of 92.5%.

Further advancements to the system may involve machine learning to achieve higher classification accuracy and to decrease false-alarm rates.


Amandeep Singh Uppal – Sr. Engineer in Technology and Solutions, Sasken Communication Technologies Ltd., Bengaluru