
Color Attenuation Prior Code Implementation


Color Attenuation Prior

  Color Attenuation Prior (hereafter, CAP) is a single-image dehazing technique published in 2015 as the paper "A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior". CAP focuses on the brightness (value) and saturation of a hazy image: in densely hazed regions the difference between brightness and saturation is large, while in haze-free regions it is small. Building on this observation, the authors show that scene depth has an approximately linear relationship with these two quantities and establish the following model:

 

$d(x) = \theta_{0} + \theta_{1}v(x) + \theta_{2}s(x) + \varepsilon(x)$

 

  The code in this post is built on the linear coefficients that the paper obtained through machine learning (θ0 = 0.121779, θ1 = 0.959710, θ2 = −0.780245, with σ = 0.041337 for the error term ε(x)).
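As a quick sanity check of the model, plugging the learned coefficients into the equation shows the expected behavior (the pixel values below are made up for illustration):

# Learned coefficients from the paper
theta0, theta1, theta2 = 0.121779, 0.959710, -0.780245

# Hazy-looking pixel: high value, low saturation -> large estimated depth
print(theta0 + theta1 * 0.9 + theta2 * 0.2)  # ~0.83

# Haze-free-looking pixel: value and saturation close together -> depth near 0
print(theta0 + theta1 * 0.5 + theta2 * 0.8)  # ~-0.02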

CAP paper review

 

[논문 리뷰] A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior(CAP)


kys0411.tistory.com

CAP original paper

 

A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior


ieeexplore.ieee.org

 

CAP Code

import numpy as np
import cv2
import copy
import math

'''
Steps for estimating the depth map in CAP
1st depth: apply the linear model above (raw per-pixel estimate)
2nd depth: local minimum filtering; block artifacts appear
3rd depth: smooth the result with a Guided Filter
'''
def quantization(pixels, bins, range_): # map continuous values (e.g. 0~1) onto a discrete scale (e.g. 0~255)
    m = range_[0]
    interval_size = (range_[1] - range_[0]) / bins

    # Vectorized, and returns a new array instead of mutating the input in place,
    # so the depth maps below keep their original scale after being saved.
    return (pixels - m) / interval_size
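# Example (hypothetical inputs): mapping the range [0, 1] onto 255 bins
#   quantization(np.array([0.0, 0.5, 1.0]), 255, [0.0, 1.0])
#   -> array([  0. , 127.5, 255. ])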

# Estimate Depth map
def Depthmap(v, s):
    # first depth map
    # Theta values
    theta0 = 0.121779
    theta1 = 0.959710
    theta2 = -0.780245
    sigma = 0.041337
    
    # The linear model above: d(x) = theta0 + theta1*v(x) + theta2*s(x) + eps(x)
    first_depth_map = theta0 + theta1 * v + theta2 * s + np.random.normal(0, sigma, v.shape)
    first = quantization(first_depth_map, 255, [first_depth_map.min(), first_depth_map.max()]).astype(np.uint8)
    # first = cv2.applyColorMap(first, cv2.COLORMAP_HOT) # color depth map
    cv2.imwrite("Color attenuation prior/output/1st_depth_map.jpg", first)
    
    # second depth map
    second_depth_map = copy.deepcopy(first_depth_map)
    width = second_depth_map.shape[1]
    height = second_depth_map.shape[0]

    n_size = 5  # neighborhood radius of the min filter (an 11x11 window)
    for i in range(height): 
        for j in range(width):
            x_low = relu(i-n_size)
            x_high = reverse_relu(height-1, i+n_size)+1
            y_low = relu(j-n_size)
            y_high = reverse_relu(width-1, j+n_size)+1
            second_depth_map[i][j] = np.min( first_depth_map[x_low:x_high, y_low:y_high] ) # min filter
                
    second = quantization(second_depth_map, 255, [second_depth_map.min(), second_depth_map.max()]).astype(np.uint8)
    # second = cv2.applyColorMap(second, cv2.COLORMAP_HOT)
    cv2.imwrite("Color attenuation prior/output/2nd_depth_map.jpg", second)
                
    # third depth map
    eps = 1e-3  # small regularizer; eps = 0 risks division by zero in flat regions
    third_depth_map = Guidedfilter(second_depth_map, first_depth_map, eps)
    third = quantization(third_depth_map, 255, [third_depth_map.min(), third_depth_map.max()]).astype(np.uint8)
    # third = cv2.applyColorMap(third, cv2.COLORMAP_HOT)
    cv2.imwrite("Color attenuation prior/output/3rd_depth_map.jpg", third)
    return third_depth_map, first, second, third
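# Note: the nested-loop min filter in Depthmap() is equivalent to a grayscale
# erosion, so the 2nd depth map could also be computed in a single call
# (a sketch, assuming the same 11x11 window as above):
#   second_depth_map = cv2.erode(first_depth_map, np.ones((11, 11), np.uint8))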

def relu(x): # clamp an index at the lower bound 0
    if x < 0:
        return 0
    else:
        return x

def reverse_relu(bound, x): # clamp an index at the upper bound
    if x > bound:
        return bound
    else:
        return x

def Guidedfilter(image, g_image, eps = 0): # image: filter input p, g_image: guide I, eps: regularization strength
    blur_factor = (50, 50)
    mean_I = cv2.blur(g_image, blur_factor) # I blurring
    mean_p = cv2.blur(image, blur_factor) # p blurring
    corr_I = cv2.blur(g_image*g_image, blur_factor) # I * I blurring
    corr_Ip = cv2.blur(g_image*image, blur_factor) # I * p blurring

    var_I = corr_I - mean_I * mean_I # variance
    cov_Ip = corr_Ip - mean_I * mean_p # covariance

    a = cov_Ip / (var_I + eps) # linear coefficient a
    b = mean_p - a * mean_I # offset b

    mean_a = cv2.blur(a, blur_factor)
    mean_b = cv2.blur(b, blur_factor)

    q = mean_a * g_image + mean_b # linear transformation
    return q
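# Alternative (an assumption: requires the optional opencv-contrib-python package):
# the hand-rolled filter above can be cross-checked against OpenCV's own
# implementation, where radius plays the role of blur_factor:
#   q = cv2.ximgproc.guidedFilter(guide=g_image.astype(np.float32),
#                                 src=image.astype(np.float32), radius=25, eps=1e-3)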

def Recover(img, third_depth_map):
    # Estimate AtmLight
    # Automatically estimate the atmospheric lights
    atmospheric_light = AtmLight(img, third_depth_map)
    
    beta = 1  # scattering coefficient (the paper uses beta = 1.0)
    t = np.exp(-beta * third_depth_map)  # t(x)
    tx = np.clip(t, 0.1, 0.9)

    output_image = copy.deepcopy(img).astype("float")
    
    for ind in range(0, 3): # invert the atmospheric scattering model: J = (I - A) / t + A
        output_image[:, :, ind] = (img[:, :, ind]-atmospheric_light[0, ind])/tx + atmospheric_light[0, ind]

    # Rescale each channel to 0~255 (255 bins; 256 would map the maximum to 256, which wraps to 0 as uint8)
    for ind in range(0, 3):
        ch = output_image[:, :, ind]
        output_image[:, :, ind] = quantization(ch, 255, [np.min(ch), np.max(ch)])

    return output_image
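# Sanity check (a sketch): inverting the recovery step should approximately
# reproduce the hazy input, up to the per-channel rescaling above:
#   I_rebuilt = J * t(x) + A * (1 - t(x))  ~=  I(x)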

def AtmLight(img, depth):
    '''
    1. Pick the 0.1% of pixels with the largest depth values (the most haze-opaque regions).
    2. Average the input image values at those locations and take the result as the
       atmospheric light -> the paper calls this automatic estimation.
    '''
    
    [h, w] = img.shape[:2] 
    img_size = h*w  
    numpx = int(max(math.floor(img_size/1000), 1)) 
    
    # Vector data
    depthvec = depth.reshape(img_size) 
    imgvec = img.reshape(img_size, 3) 

    indices = depthvec.argsort() 
    indices = indices[img_size-numpx::] 
    
    atmsum = np.zeros([1, 3]) # r,g,b channel
    for ind in range(0, numpx): 
        atmsum = atmsum + imgvec[indices[ind]] # accumulate image values at the selected (deepest) pixels

    A = atmsum / numpx
    return A 
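# The accumulation loop above can also be written as a single NumPy reduction:
#   A = imgvec[indices].mean(axis=0, keepdims=True)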

path = './image/ex.png'
src = cv2.imread(path)

hsv = cv2.cvtColor(src, cv2.COLOR_BGR2HSV)

value = hsv[:, :, 2].astype('float')/255  # brightness (V), normalized to 0~1
saturation = hsv[:, :, 1].astype('float')/255  # saturation (S), normalized to 0~1

depth, first, second, third = Depthmap(value, saturation)
J = Recover(src, depth).astype(np.uint8)

cv2.imshow("CAP: 1st depth map", first)
cv2.imshow("CAP: 2nd depth map", second)
cv2.imshow("CAP: 3rd depth map", third)
cv2.imshow("CAP: haze image", src)
cv2.imshow("CAP: Dehaze image", J)

cv2.imwrite("Color attenuation prior/output/CAP_haze.jpg", src)
cv2.imwrite("Color attenuation prior/output/CAP_Dehaze.jpg", J)
cv2.waitKey()
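The scattering coefficient β controls how aggressively haze is removed, since the transmission t(x) = exp(−β·d(x)) shrinks faster for larger β. A small sketch (assuming the variables from the script above are still in scope) makes this visible:

for beta in (0.5, 1.0, 1.5):
    t = np.clip(np.exp(-beta * depth), 0.1, 0.9)
    print(beta, round(float(t.mean()), 3))  # mean transmission drops as beta grows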

 

Result

First depth map
Second depth map
Third depth map

 

Haze image
CAP: Dehaze image

 
