Poisson Blending

Poisson blending is an important image blending algorithm, introduced in the paper [Poisson Image Editing] (published in 2003). Many of the basic concepts behind the algorithm are explained in the following blog posts:
https://blog.csdn.net/hjimce/article/details/45716603
https://blog.csdn.net/zxpddfg/article/details/75825965
http://eric-yuan.me/poisson-blending/
Another blog post that also walks through its own implementation of Poisson blending:
https://blog.csdn.net/dengheCSDN/article/details/77862567
OpenCV 3.0 and later already provide a built-in interface for Poisson blending. The code below does not use that interface; it implements the whole algorithm from scratch, which is a big help for understanding how the algorithm works.
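For comparison, here is a minimal sketch of what a call to that built-in interface (cv::seamlessClone, available since OpenCV 3.0) might look like. It assumes the file names used in this article (src.jpg, dst.jpg, mask.jpg) and is a standalone program, independent of the hand-written implementation that follows.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/photo.hpp>   // cv::seamlessClone

int main()
{
    cv::Mat src  = cv::imread("src.jpg");
    cv::Mat dst  = cv::imread("dst.jpg");
    cv::Mat mask = cv::imread("mask.jpg", cv::IMREAD_GRAYSCALE);
    if (src.empty() || dst.empty() || mask.empty())
        return -1;
    // Place the ROI at the center of the destination image, as in this article.
    cv::Point center(dst.cols / 2, dst.rows / 2);
    cv::Mat blended;
    cv::seamlessClone(src, dst, mask, center, blended, cv::NORMAL_CLONE);
    cv::imwrite("seamless_result.jpg", blended);
    return 0;
}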

Source image: src.jpg
Background (destination) image: dst.jpg
Mask image: mask.jpg
First, what does image blending mean here? We take the part of src.jpg that remains visible under mask.jpg (the ROI) and blend it into dst.jpg. To do that we have to choose where the ROI goes inside dst.jpg; in this article we place it at the center of dst.jpg. In other words, the overlapping part of src.jpg and mask.jpg is blended into the center of dst.jpg.
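As a small sketch of that placement (the helper name centeredRoiView and the variable roiSize are only illustrative; the third PoissonImageEdit overload in the code below does the same thing):

#include <opencv2/core/core.hpp>

// Illustrative sketch: a view of dst centered in the destination image,
// sized like the ROI cut out of the source image.
cv::Mat centeredRoiView(cv::Mat& dst, cv::Size roiSize)
{
    cv::Rect roiInDst(dst.cols / 2 - roiSize.width / 2,
                      dst.rows / 2 - roiSize.height / 2,
                      roiSize.width, roiSize.height);
    return dst(roiInDst);  // the blended pixels are written into this view
}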

Algorithm steps:
  1. Extract the ROI image and treat it as the new src.jpg.
  2. Compute the gradient at every pixel of src.jpg and also at every pixel of dst.jpg, compare the two, and store the larger gradient in the vector b (mixed gradients); this makes the edges of the blended image smoother.
    (For an explanation of image gradients see: https://blog.csdn.net/qq_19764963/article/details/44342389)
  3. Build the sparse matrix A. Each row of A has at most five non-zero elements, laid out like (.. -1 .. -1 .. 4 .. -1 .. -1 ..): the diagonal entry corresponds to the pixel itself and the -1 entries to its four neighbours, just like the discrete Laplacian convolution kernel (the per-pixel equation is written out right after this list).
  4. In the solve() function, use the Gauss-Seidel method to compute the blended image x.
  5. Write each pixel of x back into the corresponding position of dst.jpg to obtain the blended result.
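Written out per pixel, steps 2 and 3 amount to one linear equation for every pixel p inside the mask region Ω (a sketch in the paper's notation: N_p is the 4-neighbourhood of p, ∂Ω the surrounding pixels that keep their dst.jpg values, g the source image and f* the destination image):

\[ |N_p|\,x_p \;-\; \sum_{q \in N_p \cap \Omega} x_q \;=\; \sum_{q \in N_p \cap \partial\Omega} f^*_q \;+\; \sum_{q \in N_p} (g_p - g_q) \]

With mixed gradients enabled, each difference (g_p - g_q) is replaced by whichever of (g_p - g_q) and (f*_p - f*_q) has the larger magnitude, which is what the mixGrad branch of getEquation() below does.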

We also need to understand how the matrix multiplication works on images.
The sparse matrix A has (ROI.rows*ROI.cols) rows and the same number of columns; so if the region to be blended is 100*100 pixels, A has 10000 rows and 10000 columns.
The whole system is A*x = b, where x is the final blended image and b is the divergence computed for each pixel, i.e. the result of the convolution; b is a matrix with 10000 rows and 1 column. The ROI image is likewise flattened into a 10000*1 vector x. If the ROI is in colour we solve three such systems, one for each of the R, G and B channels; if it is grayscale a single system is enough. Finally x is pasted back into dst.jpg to complete the blend.
In short: for every pixel we compute its divergence b, build the sparse matrix A by hand, and then solve the system backwards for the image x, which is the final result.
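Concretely, solve() below uses the Gauss-Seidel iteration for this system: in each sweep every unknown pixel is updated in place from the already-updated earlier pixels and the not-yet-updated later ones,

\[ x_p^{(k+1)} = \frac{1}{A_{pp}} \Big( b_p - \sum_{q < p} A_{pq}\, x_q^{(k+1)} - \sum_{q > p} A_{pq}\, x_q^{(k)} \Big) \]

and the sweeps stop once every pixel changes by less than eps (0.01 in the code) or after maxIters (10000) sweeps.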

/*
Code adapted from: http://blog.csdn.net/zxpddfg/article/details/75825965
(CSDN blog: Poisson image blending algorithm implemented in C++)
OpenCV version: 2.4.9

The most important parts of the code are:
1. The sparse-matrix class SparseMat.
2. Building the sparse matrix A and the divergence vector b of the blended image.
3. Solving for the blended image x from A and b.
So the two key functions are:
    void getEquation()
    void solve()

Algorithm flow:
1. Build the sparse matrix A.
2. Compute the divergence vector b.
3. Initialize the blended image x by simply copying the source image into the ROI of the destination image.
4. Solve for the final x from A and b.
*/
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>
#include <iostream>
using namespace std;

struct IndexedValue
{
    IndexedValue() : index(-1), value(0) {}
    IndexedValue(int index_, double value_) : index(index_), value(value_) {}
    int index;
    double value;
};

struct SparseMat
{
    SparseMat() : rows(0), maxCols(0) {}
    SparseMat(int rows_, int cols_) : rows(0), maxCols(0)
    {
        create(rows_, cols_);
    }
    void create(int rows_, int cols_)
    {
        CV_Assert(rows_ > 0 && cols_ > 0);
        rows = rows_;
        maxCols = cols_;
        buf.resize(rows * maxCols);
        data = &buf[0];
        memset(data, -1, rows * maxCols * sizeof(IndexedValue));
        count.resize(rows);
        memset(&count[0], 0, rows * sizeof(int));
    }
    void release()
    {
        rows = 0;
        maxCols = 0;
        buf.clear();
        count.clear();
        data = 0;
    }
    /* Get a pointer to row `row` */
    const IndexedValue* rowPtr(int row) const
    {
        CV_Assert(row >= 0 && row < rows);
        return data + row * maxCols;
    }
    IndexedValue* rowPtr(int row)
    {
        CV_Assert(row >= 0 && row < rows);
        return data + row * maxCols;
    }
    /* Conceptually inserts `value` at position (row, col). The entries of a row
       are stored front to back in ascending column order, and each entry also
       records its column index. */
    void insert(int row, int col, double value)
    {
        CV_Assert(row >= 0 && row < rows);
        int currCount = count[row];
        CV_Assert(currCount < maxCols);
        IndexedValue* rowData = rowPtr(row);
        int i = 0;
        if ((currCount > 0) && (col > rowData[0].index))
        {
            for (i = 1; i < currCount; i++)
            {
                if ((col > rowData[i - 1].index) && (col < rowData[i].index))
                    break;
            }
        }
        if (i < currCount)
        {
            // shift the later entries one slot to the right to keep the row sorted
            for (int j = currCount - 1; j >= i; j--)
                rowData[j + 1] = rowData[j];
        }
        rowData[i] = IndexedValue(col, value);
        ++count[row];
    }
    /* Record in pos[i] at which position of row i the diagonal element (i, i)
       is stored; pos[i] is one of 0, 1, 2. */
    void calcDiagonalElementsPositions(std::vector<int>& pos) const
    {
        pos.resize(rows, -1);
        for (int i = 0; i < rows; i++)
        {
            const IndexedValue* ptrRow = rowPtr(i);
            for (int j = 0; j < count[i]; j++)
            {
                if (ptrRow[j].index == i)
                {
                    pos[i] = j;
                    break;
                }
            }
        }
    }
    int rows, maxCols;
    std::vector<IndexedValue> buf;
    std::vector<int> count; /* number of values inserted into each row */
    IndexedValue* data;
private:
    SparseMat(const SparseMat&);
    SparseMat& operator=(const SparseMat&);
};
/*
Compute the blended image, i.e. the image reconstruction step: solve for x.
*/
void solve(const IndexedValue* A, const int* length, const int* diagPos,
    const double* b, double* x, int rows, int cols, int maxIters, double eps)
{
    /*
    rows is the number of pixels to be modified (e.g. 676 in one of the tests),
    cols = 8 (the maximum number of entries per row of A),
    maxIters = 10000, eps = 0.01.
    The system is solved with the Gauss-Seidel method.
    */
    for (int iter = 0; iter < maxIters; iter++)
    {
        int count = 0;
        for (int i = 0; i < rows; i++)
        {
            double val = 0;
            const IndexedValue* ptrRow = A + cols * i;
            for (int j = 0; j < diagPos[i]; j++)
            {
                val += ptrRow[j].value * x[ptrRow[j].index];
            }
            for (int j = diagPos[i] + 1; j < length[i]; j++)
            {
                val += ptrRow[j].value * x[ptrRow[j].index];
            }
            val = b[i] - val;
            val /= ptrRow[diagPos[i]].value;
            if (fabs(val - x[i]) < eps)
                count++;
            x[i] = val;
        }
        if (count == rows)
        {
            printf("converge iter count = %d, end\n", iter + 1);
            break;
        }
    }
}
void makeIndex(const cv::Mat& mask, cv::Mat& index, int& numElems)
{
    CV_Assert(mask.data && mask.type() == CV_8UC1);
    int rows = mask.rows, cols = mask.cols;
    index.create(rows, cols, CV_32SC1);
    index.setTo(-1);
    int count = 0;
    for (int i = 0; i < rows; i++)
    {
        const unsigned char* ptrMaskRow = mask.ptr<unsigned char>(i);
        int* ptrIndexRow = index.ptr<int>(i);
        for (int j = 0; j < cols; j++)
        {
            if (ptrMaskRow[j])
                ptrIndexRow[j] = (count++);
        }
    }
    numElems = count;  // number of pixels inside the mask that can be modified
}
void draw(const std::vector<cv::Point>& contour, const cv::Size& imageSize,
    cv::Rect& extendRect, cv::Mat& mask)
{
    cv::Rect contourRect = cv::boundingRect(contour);
    int left, right, top, bottom;
    left = contourRect.x;
    right = contourRect.x + contourRect.width;
    top = contourRect.y;
    bottom = contourRect.y + contourRect.height;
    CV_Assert(left > 0);
    left--;
    CV_Assert(right < imageSize.width);
    right++;
    CV_Assert(top > 0);
    top--;
    CV_Assert(bottom < imageSize.height);
    bottom++;
    extendRect.x = left;
    extendRect.y = top;
    extendRect.width = right - left;
    extendRect.height = bottom - top;
    mask.create(extendRect.height, extendRect.width, CV_8UC1);
    mask.setTo(0);
    std::vector<std::vector<cv::Point> > contours(1);
    contours[0] = contour;
    cv::drawContours(mask, contours, -1, cv::Scalar(255), -1, 8,
        cv::noArray(), 0, cv::Point(-left, -top));
}

void draw(const std::vector<cv::Point>& contour, cv::Size& size, cv::Mat& mask)
{
    mask.create(size, CV_8UC1);
    mask.setTo(0);
    std::vector<std::vector<cv::Point> > contours(1);
    contours[0] = contour;
    cv::drawContours(mask, contours, -1, cv::Scalar(255), -1, 8, cv::noArray(), 0);
}
/*
Build:
  A - the sparse matrix,
  b - the divergence of the blended image.
*/
void getEquation(const cv::Mat& src, const cv::Mat& dst,
    const cv::Mat& mask, const cv::Mat& index, int count,
    SparseMat& A, cv::Mat& b, cv::Mat& x, bool mixGrad = false)
{
    CV_Assert(src.data && dst.data && mask.data && index.data);
    CV_Assert((src.type() == CV_8UC1) && (dst.type() == CV_8UC1) &&
        (mask.type() == CV_8UC1) && (index.type() == CV_32SC1));
    CV_Assert((src.size() == dst.size()) && (src.size() == mask.size()) &&
        (src.size() == index.size()));
    int rows = src.rows, cols = src.cols;
    A.create(count, 8);
    b.create(count, 1, CV_64FC1);
    b.setTo(0);
    x.create(count, 1, CV_64FC1);
    x.setTo(0);
    for (int i = 0; i < rows; i++)
    {
        for (int j = 0; j < cols; j++)
        {
            if (mask.at<unsigned char>(i, j))
            {
                int currIndex = index.at<int>(i, j);
                int currSrcVal = src.at<unsigned char>(i, j);
                int currDstVal = dst.at<unsigned char>(i, j);
                int neighborCount = 0;
                int bVal = 0;
                if (i > 0)
                {
                    neighborCount++;
                    if (mask.at<unsigned char>(i - 1, j))
                    {
                        int topIndex = index.at<int>(i - 1, j);
                        A.insert(currIndex, topIndex, -1);
                    }
                    else
                    {
                        bVal += dst.at<unsigned char>(i - 1, j);
                    }
                    if (mixGrad)
                    {
                        int srcGrad = currSrcVal - src.at<unsigned char>(i - 1, j);
                        int dstGrad = currDstVal - dst.at<unsigned char>(i - 1, j);
                        bVal += (abs(srcGrad) > abs(dstGrad) ? srcGrad : dstGrad);
                    }
                    else
                        bVal += (currSrcVal - src.at<unsigned char>(i - 1, j));
                }
                if (i < rows - 1)
                {
                    neighborCount++;
                    if (mask.at<unsigned char>(i + 1, j))
                    {
                        int bottomIndex = index.at<int>(i + 1, j);
                        A.insert(currIndex, bottomIndex, -1);
                    }
                    else
                    {
                        bVal += dst.at<unsigned char>(i + 1, j);
                    }
                    if (mixGrad)
                    {
                        int srcGrad = currSrcVal - src.at<unsigned char>(i + 1, j);
                        int dstGrad = currDstVal - dst.at<unsigned char>(i + 1, j);
                        bVal += (abs(srcGrad) > abs(dstGrad) ? srcGrad : dstGrad);
                    }
                    else
                        bVal += (currSrcVal - src.at<unsigned char>(i + 1, j));
                }
                if (j > 0)
                {
                    neighborCount++;
                    if (mask.at<unsigned char>(i, j - 1))
                    {
                        int leftIndex = index.at<int>(i, j - 1);
                        A.insert(currIndex, leftIndex, -1);
                    }
                    else
                    {
                        bVal += dst.at<unsigned char>(i, j - 1);
                    }
                    if (mixGrad)
                    {
                        int srcGrad = currSrcVal - src.at<unsigned char>(i, j - 1);
                        int dstGrad = currDstVal - dst.at<unsigned char>(i, j - 1);
                        bVal += (abs(srcGrad) > abs(dstGrad) ? srcGrad : dstGrad);
                    }
                    else
                        bVal += (currSrcVal - src.at<unsigned char>(i, j - 1));
                }
                if (j < cols - 1)
                {
                    neighborCount++;
                    if (mask.at<unsigned char>(i, j + 1))
                    {
                        int rightIndex = index.at<int>(i, j + 1);
                        A.insert(currIndex, rightIndex, -1);
                    }
                    else
                    {
                        bVal += dst.at<unsigned char>(i, j + 1);
                    }
                    if (mixGrad)
                    {
                        int srcGrad = currSrcVal - src.at<unsigned char>(i, j + 1);
                        int dstGrad = currDstVal - dst.at<unsigned char>(i, j + 1);
                        bVal += (abs(srcGrad) > abs(dstGrad) ? srcGrad : dstGrad);
                    }
                    else
                        bVal += (currSrcVal - src.at<unsigned char>(i, j + 1));
                }
                A.insert(currIndex, currIndex, neighborCount);
                b.at<double>(currIndex) = bVal;
                x.at<double>(currIndex) = currSrcVal;
                //x.at<double>(currIndex) = dst.at<unsigned char>(i, j);
            }
        }
    }
}
/*
Copy the solved values in val back into dst.
*/
void copy(const cv::Mat& val, const cv::Mat& mask, const cv::Mat& index, cv::Mat& dst)
{
    CV_Assert(val.data && val.type() == CV_64FC1);
    CV_Assert(mask.data && index.data && dst.data);
    CV_Assert((mask.type() == CV_8UC1) && (index.type() == CV_32SC1) && (dst.type() == CV_8UC1));
    CV_Assert((mask.size() == index.size()) && (mask.size() == dst.size()));
    int rows = mask.rows, cols = mask.cols;
    for (int i = 0; i < rows; i++)
    {
        const unsigned char* ptrMaskRow = mask.ptr<unsigned char>(i);
        const int* ptrIndexRow = index.ptr<int>(i);
        unsigned char* ptrDstRow = dst.ptr<unsigned char>(i);
        for (int j = 0; j < cols; j++)
        {
            if (ptrMaskRow[j])
            {
                ptrDstRow[j] = cv::saturate_cast<unsigned char>(val.at<double>(ptrIndexRow[j]));
            }
        }
    }
}
// Get the bounding rectangle of the non-zero region of an image, extended by one pixel.
cv::Rect getNonZeroBoundingRectExtendOnePixel(const cv::Mat& mask)
{
    CV_Assert(mask.data && mask.type() == CV_8UC1);
    int rows = mask.rows, cols = mask.cols;
    int top = rows, bottom = -1, left = cols, right = -1;
    for (int i = 0; i < rows; i++)
    {
        if (cv::countNonZero(mask.row(i)))
        {
            top = i;
            break;
        }
    }
    for (int i = rows - 1; i >= 0; i--)
    {
        if (cv::countNonZero(mask.row(i)))
        {
            bottom = i;
            break;
        }
    }
    for (int i = 0; i < cols; i++)
    {
        if (cv::countNonZero(mask.col(i)))
        {
            left = i;
            break;
        }
    }
    for (int i = cols - 1; i >= 0; i--)
    {
        if (cv::countNonZero(mask.col(i)))
        {
            right = i;
            break;
        }
    }
    CV_Assert(top > 0 && top < rows - 1 &&
        bottom > 0 && bottom < rows - 1 &&
        left > 0 && left < cols - 1 &&
        right > 0 && right < cols - 1);
    return cv::Rect(left - 1, top - 1, right - left + 3, bottom - top + 3);
}
/*!
The basic Poisson image editing function.
Source image, mask image and destination image should have the same size.
Source image's content inside the mask's non-zero region will be blended into
the destination image using the Poisson image editing algorithm.
\param[in] src      Source image, should be of type CV_8UC1 or CV_8UC3.
\param[in] mask     Source image's mask. Source image's content inside the mask's
                    non-zero region will be blended into the destination image.
                    The mask's non-zero region should not include the boundaries,
                    i.e., the left most and right most columns and the top most and
                    bottom most rows, otherwise the result may be incorrect.
\param[in,out] dst  Destination image, should be the same cv::Mat::type() as
                    the source image.
\param[in] mixGrad  True to apply the mixing gradient operation.
*/
void PoissonImageEdit(const cv::Mat& src, const cv::Mat& mask, cv::Mat& dst, bool mixGrad = false)
{
    CV_Assert(src.data && mask.data && dst.data);
    CV_Assert(src.size() == mask.size() && mask.size() == dst.size());
    CV_Assert(src.type() == CV_8UC1 || src.type() == CV_8UC3);
    CV_Assert(dst.type() == src.type());
    CV_Assert(mask.type() == CV_8UC1);
    cv::Mat index;
    SparseMat A; // sparse matrix, at most five non-zero entries per row
    cv::Mat b, x;
    int numElems; // number of pixels that will be modified
    makeIndex(mask, index, numElems);
    if (src.type() == CV_8UC1)  // grayscale image
    {
        getEquation(src, dst, mask, index, numElems, A, b, x, mixGrad);
        std::vector<int> diagPos;
        A.calcDiagonalElementsPositions(diagPos);
        solve(A.data, &A.count[0], &diagPos[0], (double*)b.data, (double*)x.data,
            A.rows, A.maxCols, 10000, 0.01);
        copy(x, mask, index, dst);
    }
    else if (src.type() == CV_8UC3)  // color image
    {
        cv::Mat srcROISplit[3], dstROISplit[3];
        for (int i = 0; i < 3; i++)
        {
            srcROISplit[i].create(src.size(), CV_8UC1);
            dstROISplit[i].create(dst.size(), CV_8UC1);
        }
        cv::split(src, srcROISplit);
        cv::split(dst, dstROISplit);
        for (int i = 0; i < 3; i++)
        {
            getEquation(srcROISplit[i], dstROISplit[i], mask, index, numElems,
                A, b, x, mixGrad);
            std::vector<int> diagPos;
            A.calcDiagonalElementsPositions(diagPos);
            solve(A.data, &A.count[0], &diagPos[0], (double*)b.data, (double*)x.data,
                A.rows, A.maxCols, 10000, 0.01);
            copy(x, mask, index, dstROISplit[i]);
        }
        cv::merge(dstROISplit, 3, dst);
    }
}
/*!
Overloaded Poisson image editing function.
Source image and destination image do not need to have the same size.
Source image's content inside the contour will be blended into
the destination image, with some amount of shifting, using the Poisson image editing algorithm.
\param[in] src          Source image, should be of type CV_8UC1 or CV_8UC3.
\param[in] srcContour   A contour indicating the region of interest in the
                        source image. Pixels inside the region will be blended into
                        the destination image. The region enclosed by the contour
                        should not contain pixels on the border of the source image.
\param[in] ofsSrcToDst  The offset of the source image's region of interest in
                        the destination image. Pixel (x, y) in the region of interest in
                        the source image will be blended at (x, y) + ofsSrcToDst
                        in the destination image. You should make sure
                        that the destination image's region of interest lies entirely inside
                        the destination image, excluding the border pixels.
\param[in,out] dst      Destination image, should be the same cv::Mat::type() as
                        the source image.
\param[in] mixGrad      True to apply the mixing gradient operation.
*/
void PoissonImageEdit(const cv::Mat& src, const std::vector<cv::Point>& srcContour,
    cv::Point ofsSrcToDst, cv::Mat& dst, bool mixGrad = false)
{
    CV_Assert(src.data && (src.type() == CV_8UC1 || src.type() == CV_8UC3));
    CV_Assert(srcContour.size() >= 3);
    CV_Assert(dst.data && dst.type() == src.type());
    cv::Mat mask;
    cv::Rect srcRect;
    draw(srcContour, src.size(), srcRect, mask);
    cv::Mat srcROI = src(srcRect);
    cv::Mat dstROI = dst(srcRect + ofsSrcToDst);
    PoissonImageEdit(srcROI, mask, dstROI, mixGrad);
    return;
}
/*!
Overloaded Poisson image editing function.
Source image and source mask image should have the same size.
Source image's content inside the mask's non-zero region will be blended into
the destination image, with some amount of shifting, using the Poisson image editing algorithm.
\param[in] src          Source image, should be of type CV_8UC1 or CV_8UC3.
\param[in] srcMask      Source image's mask indicating the region of interest in the
                        source image. Pixels inside the region will be blended into
                        the destination image. The region enclosed by the mask
                        should not contain pixels on the border of the source image.
\param[in] ofsSrcToDst  The offset of the source image's region of interest in
                        the destination image. Pixel (x, y) in the region of interest in
                        the source image will be blended at (x, y) + ofsSrcToDst
                        in the destination image. You should make sure
                        that the destination image's region of interest lies entirely inside
                        the destination image, excluding the border pixels.
\param[in,out] dst      Destination image, should be the same cv::Mat::type() as
                        the source image.
\param[in] mixGrad      True to apply the mixing gradient operation.
*/
void PoissonImageEdit(const cv::Mat& src, const cv::Mat& srcMask,
    cv::Point ofsSrcToDst, cv::Mat& dst, bool mixGrad = false)
{
    CV_Assert(src.data && (src.type() == CV_8UC1 || src.type() == CV_8UC3));
    CV_Assert(srcMask.data && srcMask.type() == CV_8UC1);
    CV_Assert(dst.data && dst.type() == src.type());
    cv::Rect srcRect = getNonZeroBoundingRectExtendOnePixel(srcMask);
    cv::Mat mask = srcMask(srcRect);
    cv::Mat srcROI = src(srcRect);
    // NOTE: this version ignores ofsSrcToDst and places the ROI at the center of dst;
    // the commented-out line below is the offset-based placement described above.
    //cv::Mat dstROI = dst(srcRect + ofsSrcToDst);
    cv::Mat dstROI = dst(cv::Rect(0, 0, srcRect.width, srcRect.height) +
        cv::Point(dst.cols / 2 - srcRect.width / 2, dst.rows / 2 - srcRect.height / 2));
    PoissonImageEdit(srcROI, mask, dstROI, mixGrad);
    return;
}
#define GRAY 0
#define ORDER 6 // test code

int main()
{
#if ORDER == 0
    {
        // Images from http://www.ctralie.com/Teaching/PoissonImageEditing/
        cv::Mat src = cv::imread("GreatWhiteShark.jpg");
        cv::Mat dst = cv::imread("beach.jpg");
#if GRAY
        cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);
        cv::cvtColor(dst, dst, cv::COLOR_BGR2GRAY);
#endif
        std::vector<cv::Point> contour(4);
        contour[0] = cv::Point(380, 300) - cv::Point(320, 230);
        contour[1] = cv::Point(550, 300) - cv::Point(320, 230);
        contour[2] = cv::Point(550, 420) - cv::Point(320, 230);
        contour[3] = cv::Point(380, 420) - cv::Point(320, 230);
        cv::Point ofsSrcToDst = cv::Point(320, 230);
        PoissonImageEdit(src, contour, ofsSrcToDst, dst, true);
        cv::imshow("src", src);
        cv::imshow("dst", dst);
        cv::imwrite("result0.jpg", dst);
        cv::waitKey(0);
    }
#endif
#if ORDER == 1
    {
        // Eye photo and hand photo from
        // https://en.wikipedia.org/wiki/Gradient-domain_image_processing
        cv::Mat src = cv::imread("220px-EyePhoto.jpg");
        cv::Mat dst = cv::imread("1074px-HandPhoto.jpg");
#if GRAY
        cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);
        cv::cvtColor(dst, dst, cv::COLOR_BGR2GRAY);
#endif
        std::vector<cv::Point> srcContour(4);
        srcContour[0] = cv::Point(1, 1);
        srcContour[1] = cv::Point(218, 1);
        srcContour[2] = cv::Point(218, 130);
        srcContour[3] = cv::Point(1, 130);
        cv::Point ofsSrcToDst(570, 300);
        PoissonImageEdit(src, srcContour, ofsSrcToDst, dst, true);
        cv::imshow("src", src);
        cv::imshow("dst", dst);
        cv::imwrite("result1.jpg", dst);
        cv::waitKey(0);
    }
#endif
#if ORDER == 2
    {
        // Eye photo from
        // https://en.wikipedia.org/wiki/Gradient-domain_image_processing
        // Tree photo from
        // https://commons.wikimedia.org/wiki/File:Big_Tree_with_Red_Sky_in_the_Winter_Night.jpg?uselang=zh-cn
        cv::Mat src = cv::imread("220px-EyePhoto.jpg");
        cv::Mat dst = cv::imread("1024px-Big_Tree_with_Red_Sky_in_the_Winter_Night.jpg");
#if GRAY
        cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);
        cv::cvtColor(dst, dst, cv::COLOR_BGR2GRAY);
#endif
        std::vector<cv::Point> srcContour(4);
        srcContour[0] = cv::Point(1, 1);
        srcContour[1] = cv::Point(218, 1);
        srcContour[2] = cv::Point(218, 130);
        srcContour[3] = cv::Point(1, 130);
        cv::Point ofsSrcToDst(570, 300);
        PoissonImageEdit(src, srcContour, ofsSrcToDst, dst, true);
        cv::imshow("src", src);
        cv::imshow("dst", dst);
        cv::imwrite("result2.jpg", dst);
        cv::waitKey(0);
    }
#endif
#if ORDER == 3
    // Following images from
    // http://cs.brown.edu/courses/csci1950-g/results/proj2/pdoran/
    {
        cv::Mat src = cv::imread("src_img01.jpg");
        cv::Mat srcMask = cv::imread("mask_img01.jpg", cv::IMREAD_GRAYSCALE);
        cv::Mat dst = cv::imread("tar_img01.jpg");
#if GRAY
        cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);
        cv::cvtColor(dst, dst, cv::COLOR_BGR2GRAY);
#endif
        cv::Point ofsSrcToDst(200, 200);
        cv::threshold(srcMask, srcMask, 128, 255, cv::THRESH_BINARY);
        PoissonImageEdit(src, srcMask, ofsSrcToDst, dst, true);
        cv::imshow("src", src);
        cv::imshow("dst", dst);
        cv::imwrite("result3.jpg", dst);
        cv::waitKey(0);
    }
#endif
#if ORDER == 4
    {
        cv::Mat src = cv::imread("src_img02.jpg");
        cv::Mat srcMask = cv::imread("mask_img02.jpg", cv::IMREAD_GRAYSCALE);
        cv::Mat dst = cv::imread("tar_img02.jpg");
#if GRAY
        cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);
        cv::cvtColor(dst, dst, cv::COLOR_BGR2GRAY);
#endif
        cv::Point ofsSrcToDst(20, 200);
        cv::threshold(srcMask, srcMask, 128, 255, cv::THRESH_BINARY);
        PoissonImageEdit(src, srcMask, ofsSrcToDst, dst, true);
        cv::imshow("src", src);
        cv::imshow("dst", dst);
        cv::imwrite("result4.jpg", dst);
        cv::waitKey(0);
    }
#endif
#if ORDER == 5
    {
        cv::Mat src = cv::imread("src_img03.jpg");
        cv::Mat srcMask = cv::imread("mask_img03.jpg", cv::IMREAD_GRAYSCALE);
        cv::Mat dst = cv::imread("tar_img03.jpg");
#if GRAY
        cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);
        cv::cvtColor(dst, dst, cv::COLOR_BGR2GRAY);
#endif
        cv::Point ofsSrcToDst(20, 20);
        cv::threshold(srcMask, srcMask, 128, 255, cv::THRESH_BINARY);
        PoissonImageEdit(src, srcMask, ofsSrcToDst, dst, true);
        cv::imshow("src", src);
        cv::imshow("dst", dst);
        cv::imwrite("result5.jpg", dst);
        cv::waitKey(0);
    }
#endif
#if ORDER == 6
    {
        cv::Mat src = cv::imread("src_img04.jpg");
        cv::Mat srcMask = cv::imread("mask_img04.jpg", cv::IMREAD_GRAYSCALE);
        cv::Mat dst = cv::imread("tar_img04.jpg");
        if (src.empty() || srcMask.empty() || dst.empty())
        {
            cout << "read image fail!!" << endl;
            return -1;
        }
#if GRAY
        cv::cvtColor(src, src, cv::COLOR_BGR2GRAY);
        cv::cvtColor(dst, dst, cv::COLOR_BGR2GRAY);
#endif
        cv::Point ofsSrcToDst(20, 20);
        cv::threshold(srcMask, srcMask, 128, 255, cv::THRESH_BINARY);
        PoissonImageEdit(src, srcMask, ofsSrcToDst, dst, true);
        cv::imshow("src", src);
        cv::imshow("dst", dst);
        cv::imwrite("result6.jpg", dst);
        cv::waitKey(0);
    }
#endif
    return 0;
}
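Assuming OpenCV 2.4.x is installed with pkg-config support, the test program can typically be built with something like g++ poisson_blend.cpp -o poisson_blend $(pkg-config --cflags --libs opencv); the source file name here is only illustrative.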
