Android + OpenCV: Real-Time Camera Image Recognition and Tracking (Locating a Small Image Within a Larger One)

Implementing real-time image recognition and tracking with OpenCV


Image Recognition

What is image recognition?

Image recognition is the technique of using computers to process, analyze, and understand images in order to identify targets and objects of various kinds. Based on an observed image, the system determines the categories of the objects in it and makes meaningful judgments, using modern information processing and computing techniques to simulate and carry out the human processes of perception and understanding. In general, an image recognition system consists of three parts: image segmentation, image feature extraction, and classification by a classifier.

Image segmentation divides the image into several meaningful regions; features are then extracted from each region, and the classifier finally assigns the image to a class according to those features. In practice there is no strict boundary between image recognition and image segmentation; in a sense, the process of segmenting an image is itself a process of recognizing it. Image segmentation focuses on the relationship between object and background, studying the overall properties an object exhibits against a particular background, whereas image recognition focuses on the properties of the object itself.

Current state of image recognition research

The development of image recognition has gone through three stages: character recognition, digital image processing and recognition, and object recognition.

As an important part of the computer vision stack, image recognition has long received a great deal of attention. Two years ago Microsoft announced a milestone result: its image recognition system misclassified images at a lower rate than humans do. Today the technology has reached a new level, driven by more open data, more open-source tooling, an industry chain that keeps iterating, and advances such as high-performance AI chips, depth cameras, and strong deep learning algorithms, all of which continue to push image recognition forward.

Image recognition is in fact already familiar: face recognition, iris recognition, and fingerprint recognition all belong to this field, but it goes far beyond them, covering three broad areas: biometric recognition, object and scene recognition, and video recognition. Although it is still far from the ideal, the increasingly mature technology has begun to find applications across many industries.


Image recognition technologies on Android

  1. OpenCV
    A cross-platform computer vision library released under the BSD license (open source) that runs on Linux, Windows, Android, and Mac OS.
    Lightweight and efficient: it consists of a set of C functions and a small number of C++ classes, offers bindings for Python, Ruby, MATLAB, and other languages, and implements many general-purpose algorithms for image processing and computer vision.
  2. TensorFlow
    TensorFlow is a deep learning framework that runs on Linux, Windows, macOS, and even mobile devices.
    TensorFlow provides a very rich set of deep learning APIs, arguably the most complete of any current framework, including basic vector and matrix operations, a variety of optimization algorithms, implementations of the basic building blocks of convolutional and recurrent neural networks, visualization tools, and more.
  3. YOLO
    YOLO (You Only Look Once) is a fast and accurate real-time object detection algorithm.
    A complete data pipeline for YOLOv3 has been implemented in TensorFlow; it can be used to train and evaluate your own object detection models on your own datasets.
  4. …

Implementation with OpenCV

This section walks through a demo that uses OpenCV to recognize a specified image.

Approach

① Open the camera as soon as the app starts.
② Wrap each frame captured by the camera in a Mat object and compare it frame by frame (see the sketch after this list):

  1. Detect the features of the target image and compute a descriptor for each feature set.
  2. Find the matches between the two sets of descriptors.
  3. Find the homography between the reference image and the matched region of the scene.
  4. When the frame is consistent with the homography, draw the outline of the tracked image.
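
The four steps above map almost one-to-one onto OpenCV calls. The following standalone sketch (not part of the original demo; it assumes the OpenCV 3.x Java API used throughout this article, and the class name TemplateLocator is made up for illustration) shows how the corners of a small reference image can be located inside a larger scene image:

import java.util.ArrayList;
import java.util.List;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;

public final class TemplateLocator {

    // Returns the corner coordinates of the reference (template) image inside the scene, in pixels.
    public static Mat locate(final Mat referenceGray, final Mat sceneGray) {
        final FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
        final DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
        final DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

        // 1. Detect keypoints and compute descriptors for both images.
        final MatOfKeyPoint refKp = new MatOfKeyPoint();
        final MatOfKeyPoint sceneKp = new MatOfKeyPoint();
        final Mat refDesc = new Mat();
        final Mat sceneDesc = new Mat();
        detector.detect(referenceGray, refKp);
        extractor.compute(referenceGray, refKp, refDesc);
        detector.detect(sceneGray, sceneKp);
        extractor.compute(sceneGray, sceneKp, sceneDesc);

        // 2. Match scene descriptors (query) against reference descriptors (train).
        final MatOfDMatch matches = new MatOfDMatch();
        matcher.match(sceneDesc, refDesc, matches);

        // 3. Collect the matched point pairs and estimate a homography with RANSAC.
        //    (No distance filtering here; the full filter later in the article prunes matches first.)
        final List<Point> refPts = new ArrayList<>();
        final List<Point> scenePts = new ArrayList<>();
        final List<KeyPoint> refKpList = refKp.toList();
        final List<KeyPoint> sceneKpList = sceneKp.toList();
        for (final DMatch m : matches.toList()) {
            refPts.add(refKpList.get(m.trainIdx).pt);
            scenePts.add(sceneKpList.get(m.queryIdx).pt);
        }
        final MatOfPoint2f refMat = new MatOfPoint2f();
        refMat.fromList(refPts);
        final MatOfPoint2f sceneMat = new MatOfPoint2f();
        sceneMat.fromList(scenePts);
        final Mat homography = Calib3d.findHomography(refMat, sceneMat, Calib3d.RANSAC, 3.0);

        // 4. Project the reference image's corners into scene coordinates.
        final Mat refCorners = new Mat(4, 1, CvType.CV_32FC2);
        refCorners.put(0, 0, 0.0, 0.0);
        refCorners.put(1, 0, referenceGray.cols(), 0.0);
        refCorners.put(2, 0, referenceGray.cols(), referenceGray.rows());
        refCorners.put(3, 0, 0.0, referenceGray.rows());
        final Mat sceneCorners = new Mat(4, 1, CvType.CV_32FC2);
        Core.perspectiveTransform(refCorners, sceneCorners, homography);
        return sceneCorners;
    }
}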

Code

Permission setup

AndroidManifest.xml

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

<uses-feature android:name="android.hardware.camera" />
<uses-feature
    android:name="android.hardware.camera.autofocus"
    android:required="false" />
<uses-feature
    android:name="android.hardware.camera.flash"
    android:required="false" />

Permission request method

private void requestPermissions() {
    final int REQUEST_CODE = 1;
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this, new String[]{
                        Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE},
                REQUEST_CODE);
    }
}
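
The article does not show handling of the permission result. A minimal companion callback (a sketch added here, not the original code; it uses the standard Activity callback and assumes the same request code of 1) might look like this:

@Override
public void onRequestPermissionsResult(final int requestCode, final String[] permissions, final int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == 1) { // the REQUEST_CODE used in requestPermissions()
        if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            // The camera permission was granted; the camera preview can be (re)enabled here.
        } else {
            // Without the camera permission the demo cannot work.
            Toast.makeText(this, "Camera permission is required", Toast.LENGTH_LONG).show();
            finish();
        }
    }
}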

Layout design

activity_img_recognition.xml

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:opencv="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_img_recognition"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.sueed.imagerecognition.CameraActivity">

    <org.opencv.android.JavaCameraView
        android:id="@+id/jcv"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:visibility="gone"
        opencv:camera_id="any"
        opencv:show_fps="true" />

</RelativeLayout>
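
Note that CameraActivity below constructs its JavaCameraView in code and never inflates this layout. If you would rather use the XML above, a hypothetical alternative (assuming the same Activity and this project's generated R class) is to look the view up by id:

// Inside onCreate(), instead of "mCameraView = new JavaCameraView(this, 0)":
setContentView(R.layout.activity_img_recognition);
mCameraView = findViewById(R.id.jcv);
mCameraView.setVisibility(SurfaceView.VISIBLE); // the layout declares it as android:visibility="gone"
mCameraView.setCvCameraViewListener(this);      // this Activity implements CvCameraViewListener2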

Main logic

CameraActivity.java [starts the camera, grabs frames, and wraps them in Mat objects]

Because OpenCV's JavaCameraView extends SurfaceView, you can, if needed, write your own xxxSurfaceView that extends SurfaceView implements SurfaceHolder.Callback and use it as a replacement.

package com.sueed.imagerecognition;

import android.Manifest;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.MenuItem;
import android.view.SurfaceView;
import android.view.View;
import android.view.WindowManager;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.Toast;

import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

import com.sueed.imagerecognition.filters.Filter;
import com.sueed.imagerecognition.filters.NoneFilter;
import com.sueed.imagerecognition.filters.ar.ImageDetectionFilter;
import com.sueed.imagerecognition.imagerecognition.R;

import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.JavaCameraView;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

import java.io.IOException;

// Use the deprecated Camera class.
@SuppressWarnings("deprecation")
public final class CameraActivity extends AppCompatActivity implements CvCameraViewListener2 {

    // A tag for log output.
    private static final String TAG = CameraActivity.class.getSimpleName();

    // Maximum preview frame size. The original listing referenced undefined
    // size.MaxWidth/size.MaxHeight; concrete values are assumed here.
    private static final int MAX_FRAME_WIDTH = 1280;
    private static final int MAX_FRAME_HEIGHT = 720;

    // The filters.
    private Filter[] mImageDetectionFilters;

    // The index of the active filter.
    private int mImageDetectionFilterIndex;

    // The camera view.
    private CameraBridgeViewBase mCameraView;

    @Override
    protected void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);

        // Init CameraView.
        mCameraView = new JavaCameraView(this, 0);
        mCameraView.setMaxFrameSize(MAX_FRAME_WIDTH, MAX_FRAME_HEIGHT);
        mCameraView.setCvCameraViewListener(this);
        setContentView(mCameraView);

        requestPermissions();
        mCameraView.enableView();
    }

    @Override
    public void onPause() {
        if (mCameraView != null) {
            mCameraView.disableView();
        }
        super.onPause();
    }

    @Override
    public void onResume() {
        super.onResume();
        OpenCVLoader.initDebug();
    }

    @Override
    public void onDestroy() {
        if (mCameraView != null) {
            mCameraView.disableView();
        }
        super.onDestroy();
    }

    @Override
    public boolean onCreateOptionsMenu(final Menu menu) {
        getMenuInflater().inflate(R.menu.activity_camera, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(final MenuItem item) {
        switch (item.getItemId()) {
            case R.id.menu_next_image_detection_filter:
                // Cycle to the next filter when the menu item is tapped.
                mImageDetectionFilterIndex++;
                if (mImageDetectionFilters != null && mImageDetectionFilterIndex == mImageDetectionFilters.length) {
                    mImageDetectionFilterIndex = 0;
                }
                return true;
            default:
                return super.onOptionsItemSelected(item);
        }
    }

    @Override
    public void onCameraViewStarted(final int width, final int height) {
        // Load the reference images and build the filter list.
        Filter enkidu = null;
        try {
            enkidu = new ImageDetectionFilter(CameraActivity.this, R.drawable.enkidu);
        } catch (IOException e) {
            Log.e(TAG, "Failed to load drawable: " + "enkidu");
            e.printStackTrace();
        }

        Filter akbarHunting = null;
        try {
            akbarHunting = new ImageDetectionFilter(CameraActivity.this, R.drawable.akbar_hunting_with_cheetahs);
        } catch (IOException e) {
            Log.e(TAG, "Failed to load drawable: " + "akbar_hunting_with_cheetahs");
            e.printStackTrace();
        }

        mImageDetectionFilters = new Filter[]{
                new NoneFilter(),
                enkidu,
                akbarHunting
        };
    }

    @Override
    public void onCameraViewStopped() {
    }

    @Override
    public Mat onCameraFrame(final CvCameraViewFrame inputFrame) {
        final Mat rgba = inputFrame.rgba();
        if (mImageDetectionFilters != null) {
            // Apply the active filter in place on the RGBA frame.
            mImageDetectionFilters[mImageDetectionFilterIndex].apply(rgba, rgba);
        }
        return rgba;
    }
}
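
One caveat about the code above: onCreate() calls mCameraView.enableView() before OpenCVLoader.initDebug() has run in onResume(). A slightly more defensive variant (a sketch, not the article's code) enables the preview only once the native library has actually loaded:

@Override
public void onResume() {
    super.onResume();
    if (OpenCVLoader.initDebug()) {
        Log.i(TAG, "OpenCV library loaded");
        mCameraView.enableView(); // start the preview only after the native libs are available
    } else {
        Log.e(TAG, "OpenCV initialization failed");
    }
}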

ImageDetectionFilter.java [matches the camera frame's features against the reference image and draws the green tracking outline]

package com.sueed.imagerecognition.filters.ar;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.opencv.android.Utils;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.DMatch;
import org.opencv.core.KeyPoint;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import android.content.Context;

import com.sueed.imagerecognition.filters.Filter;

public final class ImageDetectionFilter implements Filter {

    // The reference image (this detector's target).
    private final Mat mReferenceImage;

    // Features of the reference image.
    private final MatOfKeyPoint mReferenceKeypoints = new MatOfKeyPoint();

    // Descriptors of the reference image's features.
    private final Mat mReferenceDescriptors = new Mat();

    // The corner coordinates of the reference image, in pixels.
    // CvType defines the color depth, number of channels, and
    // channel layout in the image. Here, each point is represented
    // by two 32-bit floats.
    private final Mat mReferenceCorners = new Mat(4, 1, CvType.CV_32FC2);

    // Features of the scene (the current frame).
    private final MatOfKeyPoint mSceneKeypoints = new MatOfKeyPoint();

    // Descriptors of the scene's features.
    private final Mat mSceneDescriptors = new Mat();

    // Tentative corner coordinates detected in the scene, in pixels.
    private final Mat mCandidateSceneCorners = new Mat(4, 1, CvType.CV_32FC2);

    // Good corner coordinates detected in the scene, in pixels.
    private final Mat mSceneCorners = new Mat(0, 0, CvType.CV_32FC2);

    // The good detected corner coordinates, in pixels, as integers.
    private final MatOfPoint mIntSceneCorners = new MatOfPoint();

    // A grayscale version of the scene.
    private final Mat mGraySrc = new Mat();

    // Tentative matches of scene features and reference features.
    private final MatOfDMatch mMatches = new MatOfDMatch();

    // A feature detector, which finds features in images.
    private final FeatureDetector mFeatureDetector = FeatureDetector.create(FeatureDetector.ORB);

    // A descriptor extractor, which creates descriptors of features.
    private final DescriptorExtractor mDescriptorExtractor = DescriptorExtractor.create(DescriptorExtractor.ORB);

    // A descriptor matcher, which matches features based on their descriptors.
    private final DescriptorMatcher mDescriptorMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);

    // The color of the outline drawn around the detected image.
    private final Scalar mLineColor = new Scalar(0, 255, 0);

    public ImageDetectionFilter(final Context context, final int referenceImageResourceID) throws IOException {
        // Load the reference image from the app's resources.
        // It is loaded in BGR (blue, green, red) format.
        mReferenceImage = Utils.loadResource(context, referenceImageResourceID, Imgcodecs.CV_LOAD_IMAGE_COLOR);

        // Create grayscale and RGBA versions of the reference image.
        final Mat referenceImageGray = new Mat();
        Imgproc.cvtColor(mReferenceImage, referenceImageGray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.cvtColor(mReferenceImage, mReferenceImage, Imgproc.COLOR_BGR2RGBA);

        // Store the reference image's corner coordinates, in pixels.
        mReferenceCorners.put(0, 0, new double[]{0.0, 0.0});
        mReferenceCorners.put(1, 0, new double[]{referenceImageGray.cols(), 0.0});
        mReferenceCorners.put(2, 0, new double[]{referenceImageGray.cols(), referenceImageGray.rows()});
        mReferenceCorners.put(3, 0, new double[]{0.0, referenceImageGray.rows()});

        // Detect the reference features and compute their descriptors.
        mFeatureDetector.detect(referenceImageGray, mReferenceKeypoints);
        mDescriptorExtractor.compute(referenceImageGray, mReferenceKeypoints, mReferenceDescriptors);
    }

    @Override
    public void apply(final Mat src, final Mat dst) {
        // Convert the scene to grayscale.
        Imgproc.cvtColor(src, mGraySrc, Imgproc.COLOR_RGBA2GRAY);

        // Detect the scene features, compute their descriptors,
        // and match the scene descriptors to reference descriptors.
        mFeatureDetector.detect(mGraySrc, mSceneKeypoints);
        mDescriptorExtractor.compute(mGraySrc, mSceneKeypoints, mSceneDescriptors);
        mDescriptorMatcher.match(mSceneDescriptors, mReferenceDescriptors, mMatches);

        // Attempt to find the target image's corners in the scene.
        findSceneCorners();

        // If the corners have been found, draw an outline around the
        // target image. Else, draw a thumbnail of the target image.
        draw(src, dst);
    }

    private void findSceneCorners() {
        final List<DMatch> matchesList = mMatches.toList();
        if (matchesList.size() < 4) {
            // There are too few matches to find the homography.
            return;
        }

        final List<KeyPoint> referenceKeypointsList = mReferenceKeypoints.toList();
        final List<KeyPoint> sceneKeypointsList = mSceneKeypoints.toList();

        // Calculate the max and min distances between keypoints.
        double maxDist = 0.0;
        double minDist = Double.MAX_VALUE;
        for (final DMatch match : matchesList) {
            final double dist = match.distance;
            if (dist < minDist) {
                minDist = dist;
            }
            if (dist > maxDist) {
                maxDist = dist;
            }
        }

        // The thresholds for minDist are chosen subjectively
        // based on testing. The unit is not related to pixel
        // distances; it is related to the number of failed tests
        // for similarity between the matched descriptors.
        if (minDist > 50.0) {
            // The target is completely lost.
            // Discard any previously found corners.
            mSceneCorners.create(0, 0, mSceneCorners.type());
            return;
        } else if (minDist > 25.0) {
            // The target is lost but maybe it is still close.
            // Keep any previously found corners.
            return;
        }

        // Identify "good" keypoints based on match distance.
        final ArrayList<Point> goodReferencePointsList = new ArrayList<Point>();
        final ArrayList<Point> goodScenePointsList = new ArrayList<Point>();
        final double maxGoodMatchDist = 1.75 * minDist;
        for (final DMatch match : matchesList) {
            if (match.distance < maxGoodMatchDist) {
                goodReferencePointsList.add(referenceKeypointsList.get(match.trainIdx).pt);
                goodScenePointsList.add(sceneKeypointsList.get(match.queryIdx).pt);
            }
        }

        if (goodReferencePointsList.size() < 4 || goodScenePointsList.size() < 4) {
            // There are too few good points to find the homography.
            return;
        }

        // There are enough good points to find the homography.
        // (Otherwise, the method would have already returned.)

        // Convert the matched points to MatOfPoint2f format, as
        // required by the Calib3d.findHomography function.
        final MatOfPoint2f goodReferencePoints = new MatOfPoint2f();
        goodReferencePoints.fromList(goodReferencePointsList);
        final MatOfPoint2f goodScenePoints = new MatOfPoint2f();
        goodScenePoints.fromList(goodScenePointsList);

        // Find the homography.
        final Mat homography = Calib3d.findHomography(goodReferencePoints, goodScenePoints);

        // Use the homography to project the reference corner
        // coordinates into scene coordinates.
        Core.perspectiveTransform(mReferenceCorners, mCandidateSceneCorners, homography);

        // Convert the scene corners to integer format, as required
        // by the Imgproc.isContourConvex function.
        mCandidateSceneCorners.convertTo(mIntSceneCorners, CvType.CV_32S);

        // Check whether the corners form a convex polygon. If not
        // (that is, if the corners form a concave polygon), the
        // detection result is invalid because no real perspective can
        // make the corners of a rectangular image look like a concave
        // polygon!
        if (Imgproc.isContourConvex(mIntSceneCorners)) {
            // The corners form a convex polygon, so record them as
            // valid scene corners.
            mCandidateSceneCorners.copyTo(mSceneCorners);
        }
    }

    protected void draw(final Mat src, final Mat dst) {
        if (dst != src) {
            src.copyTo(dst);
        }

        if (mSceneCorners.height() < 4) {
            // The target has not been found.

            // Draw a thumbnail of the target in the upper-left
            // corner so that the user knows what it is.

            // Compute the thumbnail's larger dimension as half the
            // video frame's smaller dimension.
            int height = mReferenceImage.height();
            int width = mReferenceImage.width();
            final int maxDimension = Math.min(dst.width(), dst.height()) / 2;
            final double aspectRatio = width / (double) height;
            if (height > width) {
                height = maxDimension;
                width = (int) (height * aspectRatio);
            } else {
                width = maxDimension;
                height = (int) (width / aspectRatio);
            }

            // Select the region of interest (ROI) where the thumbnail
            // will be drawn.
            final Mat dstROI = dst.submat(0, height, 0, width);

            // Copy a resized reference image into the ROI.
            Imgproc.resize(mReferenceImage, dstROI, dstROI.size(), 0.0, 0.0, Imgproc.INTER_AREA);
            return;
        }

        // Outline the found target in green.
        Imgproc.line(dst, new Point(mSceneCorners.get(0, 0)), new Point(mSceneCorners.get(1, 0)), mLineColor, 4);
        Imgproc.line(dst, new Point(mSceneCorners.get(1, 0)), new Point(mSceneCorners.get(2, 0)), mLineColor, 4);
        Imgproc.line(dst, new Point(mSceneCorners.get(2, 0)), new Point(mSceneCorners.get(3, 0)), mLineColor, 4);
        Imgproc.line(dst, new Point(mSceneCorners.get(3, 0)), new Point(mSceneCorners.get(0, 0)), mLineColor, 4);
    }
}
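
The Filter interface and NoneFilter referenced above are not listed in the article. A minimal sketch consistent with how CameraActivity uses them (the method signature is inferred from those calls, so treat it as an assumption) could be:

import org.opencv.core.Mat;

// Filter.java: the common interface applied to every camera frame.
public interface Filter {
    void apply(final Mat src, final Mat dst);
}

// NoneFilter.java (a separate file): a pass-through filter.
public class NoneFilter implements Filter {
    @Override
    public void apply(final Mat src, final Mat dst) {
        // Do nothing; CameraActivity passes the same Mat as src and dst,
        // so the frame is displayed unchanged.
    }
}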

Results

Granting the requested permissions:


Tracking the specified image in real time:


Conclusion

This article only implements recognition that requires a complete reference image for comparison. There are many smarter and more flexible recognition techniques: for example, HOG, SIFT, and SURF can extract features from images after training on sets of positive and negative samples, and an object's class can then be determined from those features. A large part of OpenCV's functionality also remains untouched in this article and is left for further exploration and study.
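
As a small illustration of the sample-trained detectors mentioned above, OpenCV ships a HOG descriptor with a pre-trained pedestrian detector. The sketch below (not part of the original demo; the class name HogPeopleDetector is made up for this example) uses it from the Java API:

import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.HOGDescriptor;

public final class HogPeopleDetector {

    // Detect pedestrians in a grayscale frame and outline them on the RGBA output frame.
    public static void detectAndDraw(final Mat gray, final Mat rgbaOut) {
        final HOGDescriptor hog = new HOGDescriptor();
        hog.setSVMDetector(HOGDescriptor.getDefaultPeopleDetector());

        final MatOfRect locations = new MatOfRect();
        final MatOfDouble weights = new MatOfDouble();
        hog.detectMultiScale(gray, locations, weights);

        for (final Rect r : locations.toArray()) {
            Imgproc.rectangle(rgbaOut, r.tl(), r.br(), new Scalar(0, 255, 0), 2);
        }
    }
}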

Reprinted from the CSDN blog post "Android开发—基于OpenCV实现相机实时图像识别跟踪" by Sueed.

