
title: Android native face detection: quick face tracking with Camera2 + FaceDetector

categories:

- Android

tags:

- Face detection
- FaceDetector
- Camera2

date: 2020-05-27 14:02:13

This post covers Android's native face detection API, FaceDetector. It can tell you whether a face is on screen, how many faces there are, the 2D coordinates of the midpoint between the eyes, and the distance between the eyes. One pitfall I ran into: it is reliable for detecting whether a face is present, but if you need a precise face position or distance, expect deviations — the output is only a 2D coordinate, so it cannot locate the face accurately in physical space. If those capabilities are not enough for the feature you are building, rule this approach out now.

Enough preamble; on to the implementation.

Implementation

1. First, implement a custom view that draws rectangles over the preview:

```kotlin
class FaceView : View {

    private lateinit var mPaint: Paint
    private var mColor = "#42ed45"
    private var mFaces: ArrayList<RectF>? = null

    constructor(context: Context) : super(context) {
        init()
    }

    constructor(context: Context, attrs: AttributeSet?) : super(context, attrs) {
        init()
    }

    constructor(context: Context, attrs: AttributeSet?, defStyleAttr: Int) : super(context, attrs, defStyleAttr) {
        init()
    }

    private fun init() {
        mPaint = Paint()
        mPaint.color = Color.parseColor(mColor)
        mPaint.style = Paint.Style.STROKE
        mPaint.strokeWidth = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 3f, context.resources.displayMetrics)
        mPaint.isAntiAlias = true
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        mFaces?.let {
            for (face in it) {
                canvas.drawRect(face, mPaint)
            }
        }
    }

    fun setFaces(faces: ArrayList<RectF>) {
        this.mFaces = faces
        invalidate()
    }
}
```

ImageUtil converts the raw planar frame formats the camera returns:

```java
/**
 * Author: Sar_Wang
 * Date: 2020/5/11 3:40 PM
 * Description: converts planar YUV data to NV21.
 */
public class ImageUtil {

    /**
     * Converts Y:U:V == 4:2:2 plane data to NV21.
     *
     * @param y      Y plane data
     * @param u      U plane data
     * @param v      V plane data
     * @param nv21   output NV21 buffer; must be allocated in advance
     * @param stride row stride
     * @param height image height
     */
    public static void yuv422ToYuv420sp(byte[] y, byte[] u, byte[] v, byte[] nv21, int stride, int height) {
        System.arraycopy(y, 0, nv21, 0, y.length);
        // Note: using y.length * 3 / 2 here risks an out-of-bounds access;
        // compute the length from the real plane sizes instead.
        int length = y.length + u.length / 2 + v.length / 2;
        int uIndex = 0, vIndex = 0;
        for (int i = stride * height; i < length; i += 2) {
            nv21[i] = v[vIndex];
            nv21[i + 1] = u[uIndex];
            vIndex += 2;
            uIndex += 2;
        }
    }

    /**
     * Converts Y:U:V == 4:1:1 plane data to NV21.
     *
     * @param y      Y plane data
     * @param u      U plane data
     * @param v      V plane data
     * @param nv21   output NV21 buffer; must be allocated in advance
     * @param stride row stride
     * @param height image height
     */
    public static void yuv420ToYuv420sp(byte[] y, byte[] u, byte[] v, byte[] nv21, int stride, int height) {
        System.arraycopy(y, 0, nv21, 0, y.length);
        // Note: using y.length * 3 / 2 here risks an out-of-bounds access;
        // compute the length from the real plane sizes instead.
        int length = y.length + u.length + v.length;
        int uIndex = 0, vIndex = 0;
        // Step by 2: each iteration writes one interleaved V/U pair
        for (int i = stride * height; i < length; i += 2) {
            nv21[i] = v[vIndex++];
            nv21[i + 1] = u[uIndex++];
        }
    }
}
```
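To sanity-check the interleaving, here is a minimal plain-Java sketch with no Android dependencies; it mirrors the article's `yuv420ToYuv420sp` on a tiny synthetic 4x2 frame. The class name `Nv21Demo` and the sample data are my own, not from the article.

```java
public class Nv21Demo {
    // Mirrors yuv420ToYuv420sp: copy the Y plane, then interleave V before U,
    // which is exactly what distinguishes NV21 from NV12.
    public static byte[] toNv21(byte[] y, byte[] u, byte[] v, int stride, int height) {
        byte[] nv21 = new byte[y.length + u.length + v.length];
        System.arraycopy(y, 0, nv21, 0, y.length);
        int uIndex = 0, vIndex = 0;
        for (int i = stride * height; i < nv21.length; i += 2) {
            nv21[i] = v[vIndex++];      // V first ...
            nv21[i + 1] = u[uIndex++];  // ... then U
        }
        return nv21;
    }

    public static void main(String[] args) {
        // 4x2 frame: 8 Y samples, 2 U and 2 V samples (4:1:1 plane sizes)
        byte[] y = {1, 2, 3, 4, 5, 6, 7, 8};
        byte[] u = {10, 11};
        byte[] v = {20, 21};
        byte[] nv21 = toNv21(y, u, v, 4, 2);
        // The chroma section should read V0 U0 V1 U1
        System.out.println(nv21[8] + " " + nv21[9] + " " + nv21[10] + " " + nv21[11]); // 20 10 21 11
    }
}
```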

Next, the layout of the activity that drives the camera:

```xml
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextureView
        android:id="@+id/textureView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

    <Button
        android:id="@+id/switch_Camera"
        android:layout_gravity="end|bottom"
        android:layout_marginBottom="90dp"
        android:layout_marginEnd="40dp"
        android:text="Switch camera"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <!-- Use the fully-qualified class name of the custom FaceView here -->
    <FaceView
        android:id="@+id/faceView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

</FrameLayout>
```

To save some effort here, I used a camera helper class written by another developer; the source is worth a read if you are interested:

```java
public class Camera2Helper {
    private static final String TAG = "Camera2Helper";

    private Point maxPreviewSize;
    private Point minPreviewSize;

    public static final String CAMERA_ID_FRONT = "1";
    public static final String CAMERA_ID_BACK = "0";

    private String mCameraId;
    private String specificCameraId;
    private Camera2Listener camera2Listener;
    private TextureView mTextureView;
    private int rotation;
    private Point previewViewSize;
    private Point specificPreviewSize;
    private boolean isMirror;
    private Context context;
    private boolean mCalibrated;
    private boolean mIsVertical = true;

    /**
     * A {@link CameraCaptureSession} for camera preview.
     */
    private CameraCaptureSession mCaptureSession;

    /**
     * A reference to the opened {@link CameraDevice}.
     */
    private CameraDevice mCameraDevice;

    private Size mPreviewSize;

    private Camera2Helper(Camera2Helper.Builder builder) {
        mTextureView = builder.previewDisplayView;
        specificCameraId = builder.specificCameraId;
        camera2Listener = builder.camera2Listener;
        rotation = builder.rotation;
        previewViewSize = builder.previewViewSize;
        specificPreviewSize = builder.previewSize;
        maxPreviewSize = builder.maxPreviewSize;
        minPreviewSize = builder.minPreviewSize;
        isMirror = builder.isMirror;
        context = builder.context;
        if (isMirror) {
            mTextureView.setScaleX(-1);
        }
    }

    public void setConfiguration(boolean val) {
        mIsVertical = val;
    }

    public void switchCamera() {
        if (CAMERA_ID_BACK.equals(mCameraId)) {
            specificCameraId = CAMERA_ID_FRONT;
        } else if (CAMERA_ID_FRONT.equals(mCameraId)) {
            specificCameraId = CAMERA_ID_BACK;
        }
        stop();
        start();
    }

    private int getCameraOri(int rotation, String cameraId) {
        int degrees = rotation * 90;
        switch (rotation) {
            case Surface.ROTATION_0:
                degrees = 0;
                break;
            case Surface.ROTATION_90:
                degrees = 90;
                break;
            case Surface.ROTATION_180:
                degrees = 180;
                break;
            case Surface.ROTATION_270:
                degrees = 270;
                break;
            default:
                break;
        }
        int result;
        if (CAMERA_ID_FRONT.equals(cameraId)) {
            result = (mSensorOrientation + degrees) % 360;
            result = (360 - result) % 360;
        } else {
            result = (mSensorOrientation - degrees + 360) % 360;
        }
        Log.i(TAG, "getCameraOri: " + rotation + " " + result + " " + mSensorOrientation);
        return result;
    }

    private final TextureView.SurfaceTextureListener mSurfaceTextureListener
            = new TextureView.SurfaceTextureListener() {

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
            Log.i(TAG, "onSurfaceTextureAvailable: ");
            openCamera();
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int width, int height) {
            Log.i(TAG, "onSurfaceTextureSizeChanged: ");
            configureTransform(width, height);
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
            Log.i(TAG, "onSurfaceTextureDestroyed: ");
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture texture) {
        }
    };

    private CameraDevice.StateCallback mDeviceStateCallback = new CameraDevice.StateCallback() {

        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            Log.i(TAG, "onOpened: ");
            // This method is called when the camera is opened. We start camera preview here.
            mCameraOpenCloseLock.release();
            mCameraDevice = cameraDevice;
            createCameraPreviewSession();
            if (camera2Listener != null) {
                camera2Listener.onCameraOpened(cameraDevice, mCameraId, mPreviewSize, getCameraOri(rotation, mCameraId), isMirror);
            }
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            Log.i(TAG, "onDisconnected: ");
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
            if (camera2Listener != null) {
                camera2Listener.onCameraClosed();
            }
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            Log.i(TAG, "onError: ");
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
            if (camera2Listener != null) {
                camera2Listener.onCameraError(new Exception("error occurred, code is " + error));
            }
        }
    };

    private CameraCaptureSession.StateCallback mCaptureStateCallback = new CameraCaptureSession.StateCallback() {

        @Override
        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
            Log.i(TAG, "onConfigured: ");
            // The camera is already closed
            if (null == mCameraDevice) {
                return;
            }
            // When the session is ready, we start displaying the preview.
            mCaptureSession = cameraCaptureSession;
            try {
                mCaptureSession.setRepeatingRequest(mPreviewRequestBuilder.build(),
                        mCaptureCallBack, mBackgroundHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onConfigureFailed(
                @NonNull CameraCaptureSession cameraCaptureSession) {
            Log.i(TAG, "onConfigureFailed: ");
            if (camera2Listener != null) {
                camera2Listener.onCameraError(new Exception("configureFailed"));
            }
        }
    };

    private CameraCaptureSession.CaptureCallback mCaptureCallBack = new CameraCaptureSession.CaptureCallback() {

        @Override
        public void onCaptureCompleted(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
            super.onCaptureCompleted(session, request, result);
            camera2Listener.onHandleFaces(result);
        }

        @Override
        public void onCaptureFailed(@NonNull CameraCaptureSession session, @NonNull CaptureRequest request, @NonNull CaptureFailure failure) {
            super.onCaptureFailed(session, request, failure);
        }
    };

    /**
     * An additional thread for running tasks that shouldn't block the UI.
     */
    private HandlerThread mBackgroundThread;

    /**
     * A {@link Handler} for running tasks in the background.
     */
    private Handler mBackgroundHandler;

    private ImageReader mImageReader;

    /**
     * {@link CaptureRequest.Builder} for the camera preview
     */
    private CaptureRequest.Builder mPreviewRequestBuilder;

    /**
     * A {@link Semaphore} to prevent the app from exiting before closing the camera.
     */
    private Semaphore mCameraOpenCloseLock = new Semaphore(1);

    /**
     * Orientation of the camera sensor
     */
    private int mSensorOrientation;

    private Size getBestSupportedSize(List<Size> sizes) {
        Size defaultSize = sizes.get(0);
        Size[] tempSizes = sizes.toArray(new Size[0]);
        Arrays.sort(tempSizes, new Comparator<Size>() {
            @Override
            public int compare(Size o1, Size o2) {
                if (o1.getWidth() > o2.getWidth()) {
                    return -1;
                } else if (o1.getWidth() == o2.getWidth()) {
                    return o1.getHeight() > o2.getHeight() ? -1 : 1;
                } else {
                    return 1;
                }
            }
        });
        sizes = new ArrayList<>(Arrays.asList(tempSizes));
        for (int i = sizes.size() - 1; i >= 0; i--) {
            if (maxPreviewSize != null) {
                if (sizes.get(i).getWidth() > maxPreviewSize.x || sizes.get(i).getHeight() > maxPreviewSize.y) {
                    sizes.remove(i);
                    continue;
                }
            }
            if (minPreviewSize != null) {
                if (sizes.get(i).getWidth() < minPreviewSize.x || sizes.get(i).getHeight() < minPreviewSize.y) {
                    sizes.remove(i);
                }
            }
        }
        if (sizes.size() == 0) {
            String msg = "can not find suitable previewSize, now using default";
            if (camera2Listener != null) {
                Log.e(TAG, msg);
                camera2Listener.onCameraError(new Exception(msg));
            }
            return defaultSize;
        }
        Size bestSize = sizes.get(0);
        float previewViewRatio;
        if (previewViewSize != null) {
            previewViewRatio = (float) previewViewSize.x / (float) previewViewSize.y;
        } else {
            previewViewRatio = (float) bestSize.getWidth() / (float) bestSize.getHeight();
        }
        if (previewViewRatio > 1) {
            previewViewRatio = 1 / previewViewRatio;
        }
        for (Size s : sizes) {
            if (specificPreviewSize != null && specificPreviewSize.x == s.getWidth() && specificPreviewSize.y == s.getHeight()) {
                return s;
            }
            if (Math.abs((s.getHeight() / (float) s.getWidth()) - previewViewRatio) < Math.abs(bestSize.getHeight() / (float) bestSize.getWidth() - previewViewRatio)) {
                bestSize = s;
            }
        }
        return bestSize;
    }

    public synchronized void start() {
        if (mCameraDevice != null) {
            return;
        }
        startBackgroundThread();

        // When the screen is turned off and turned back on, the SurfaceTexture is already
        // available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open
        // a camera and start preview from here (otherwise, we wait until the surface is ready in
        // the SurfaceTextureListener).
        if (mTextureView.isAvailable()) {
            openCamera();
        } else {
            mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
        }
    }

    public synchronized void stop() {
        if (mCameraDevice == null) {
            return;
        }
        closeCamera();
        stopBackgroundThread();
    }

    public void release() {
        stop();
        mTextureView = null;
        camera2Listener = null;
        context = null;
    }

    private void setUpCameraOutputs(CameraManager cameraManager) {
        try {
            if (configCameraParams(cameraManager, specificCameraId)) {
                return;
            }
            for (String cameraId : cameraManager.getCameraIdList()) {
                if (configCameraParams(cameraManager, cameraId)) {
                    return;
                }
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (NullPointerException e) {
            // Currently an NPE is thrown when the Camera2API is used but not supported on the
            // device this code runs.
            if (camera2Listener != null) {
                camera2Listener.onCameraError(e);
            }
        }
    }

    private boolean configCameraParams(CameraManager manager, String cameraId) throws CameraAccessException {
        CameraCharacteristics characteristics
                = manager.getCameraCharacteristics(cameraId);
        StreamConfigurationMap map = characteristics.get(
                CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        if (map == null) {
            return false;
        }
        mPreviewSize = getBestSupportedSize(new ArrayList<>(Arrays.asList(map.getOutputSizes(SurfaceTexture.class))));
        mImageReader = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(),
                ImageFormat.YUV_420_888, 2);
        mImageReader.setOnImageAvailableListener(
                new OnImageAvailableListenerImpl(), mBackgroundHandler);

        mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
        mCameraId = cameraId;
        return true;
    }

    private void openCamera() {
        CameraManager cameraManager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        setUpCameraOutputs(cameraManager);
        configureTransform(mTextureView.getWidth(), mTextureView.getHeight());
        try {
            if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
                throw new RuntimeException("Time out waiting to lock camera opening.");
            }
            cameraManager.openCamera(mCameraId, mDeviceStateCallback, mBackgroundHandler);
        } catch (CameraAccessException e) {
            if (camera2Listener != null) {
                camera2Listener.onCameraError(e);
            }
        } catch (InterruptedException e) {
            if (camera2Listener != null) {
                camera2Listener.onCameraError(e);
            }
        }
    }

    /**
     * Closes the current {@link CameraDevice}.
     */
    private void closeCamera() {
        try {
            mCameraOpenCloseLock.acquire();
            if (null != mCaptureSession) {
                mCaptureSession.close();
                mCaptureSession = null;
            }
            if (null != mCameraDevice) {
                mCameraDevice.close();
                mCameraDevice = null;
            }
            if (null != mImageReader) {
                mImageReader.close();
                mImageReader = null;
            }
            if (camera2Listener != null) {
                camera2Listener.onCameraClosed();
            }
        } catch (InterruptedException e) {
            if (camera2Listener != null) {
                camera2Listener.onCameraError(e);
            }
        } finally {
            mCameraOpenCloseLock.release();
        }
    }

    /**
     * Starts a background thread and its {@link Handler}.
     */
    private void startBackgroundThread() {
        mBackgroundThread = new HandlerThread("CameraBackground");
        mBackgroundThread.start();
        mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
    }

    /**
     * Stops the background thread and its {@link Handler}.
     */
    private void stopBackgroundThread() {
        mBackgroundThread.quitSafely();
        try {
            mBackgroundThread.join();
            mBackgroundThread = null;
            mBackgroundHandler = null;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    /**
     * Creates a new {@link CameraCaptureSession} for camera preview.
     */
    private void createCameraPreviewSession() {
        try {
            SurfaceTexture texture = mTextureView.getSurfaceTexture();
            assert texture != null;

            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

            // This is the output Surface we need to start preview.
            Surface surface = new Surface(texture);

            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder
                    = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
            mPreviewRequestBuilder.addTarget(surface);
            mPreviewRequestBuilder.addTarget(mImageReader.getSurface());

            // Here, we create a CameraCaptureSession for camera preview.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    mCaptureStateCallback, mBackgroundHandler
            );
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    /**
     * Configures the necessary {@link Matrix} transformation to `mTextureView`.
     * This method should be called after the camera preview size is determined in
     * setUpCameraOutputs and also the size of `mTextureView` is fixed.
     *
     * @param viewWidth  The width of `mTextureView`
     * @param viewHeight The height of `mTextureView`
     */
    private void configureTransform(int viewWidth, int viewHeight) {
        if (null == mTextureView || null == mPreviewSize) {
            return;
        }
        Matrix matrix = new Matrix();
        RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
        RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
        float centerX = viewRect.centerX();
        float centerY = viewRect.centerY();
        if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
            bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
            matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
            float scale = Math.max(
                    (float) viewHeight / mPreviewSize.getHeight(),
                    (float) viewWidth / mPreviewSize.getWidth());
            matrix.postScale(scale, scale, centerX, centerY);
            matrix.postRotate((90 * (rotation - 2)) % 360, centerX, centerY);
        } else if (Surface.ROTATION_180 == rotation) {
            matrix.postRotate(180, centerX, centerY);
        }
        Log.i(TAG, "configureTransform: " + getCameraOri(rotation, mCameraId) + " " + rotation * 90);
        mTextureView.setTransform(matrix);
    }

    public static final class Builder {

        /**
         * The view to preview on; currently only TextureView is supported
         */
        private TextureView previewDisplayView;

        /**
         * Whether to mirror the preview; TextureView only
         */
        private boolean isMirror;

        /**
         * The specific camera ID to open
         */
        private String specificCameraId;

        /**
         * Event callback
         */
        private Camera2Listener camera2Listener;

        /**
         * The preview view's size, used when picking the best preview ratio
         */
        private Point previewViewSize;

        /**
         * Pass the value of getWindowManager().getDefaultDisplay().getRotation()
         */
        private int rotation;

        /**
         * A specific preview size; used if the device supports it
         */
        private Point previewSize;

        /**
         * Maximum preview resolution
         */
        private Point maxPreviewSize;

        /**
         * Minimum preview resolution
         */
        private Point minPreviewSize;

        /**
         * Context, used to obtain the CameraManager
         */
        private Context context;

        public Builder() {
        }

        public Builder previewOn(TextureView val) {
            previewDisplayView = val;
            return this;
        }

        public Builder isMirror(boolean val) {
            isMirror = val;
            return this;
        }

        public Builder previewSize(Point val) {
            previewSize = val;
            return this;
        }

        public Builder maxPreviewSize(Point val) {
            maxPreviewSize = val;
            return this;
        }

        public Builder minPreviewSize(Point val) {
            minPreviewSize = val;
            return this;
        }

        public Builder previewViewSize(Point val) {
            previewViewSize = val;
            return this;
        }

        public Builder rotation(int val) {
            rotation = val;
            return this;
        }

        public Builder specificCameraId(String val) {
            specificCameraId = val;
            return this;
        }

        public Builder cameraListener(Camera2Listener val) {
            camera2Listener = val;
            return this;
        }

        public Builder context(Context val) {
            context = val;
            return this;
        }

        public Camera2Helper build() {
            if (previewViewSize == null) {
                Log.e(TAG, "previewViewSize is null, now use default previewSize");
            }
            if (camera2Listener == null) {
                Log.e(TAG, "camera2Listener is null, callback will not be called");
            }
            if (previewDisplayView == null) {
                throw new NullPointerException("you must preview on a textureView or a surfaceView");
            }
            if (maxPreviewSize != null && minPreviewSize != null) {
                if (maxPreviewSize.x < minPreviewSize.x || maxPreviewSize.y < minPreviewSize.y) {
                    throw new IllegalArgumentException("maxPreviewSize must greater than minPreviewSize");
                }
            }
            return new Camera2Helper(this);
        }
    }

    private class OnImageAvailableListenerImpl implements ImageReader.OnImageAvailableListener {
        private byte[] y;
        private byte[] u;
        private byte[] v;
        private ReentrantLock lock = new ReentrantLock();

        @Override
        public void onImageAvailable(ImageReader reader) {
            Image image = reader.acquireNextImage();
            // Y:U:V == 4:2:2
            if (camera2Listener != null && image.getFormat() == ImageFormat.YUV_420_888) {
                Image.Plane[] planes = image.getPlanes();
                // Lock to make sure y, u and v all come from the same Image
                lock.lock();
                // Reuse the same byte arrays to reduce GC pressure
                if (y == null) {
                    y = new byte[planes[0].getBuffer().limit() - planes[0].getBuffer().position()];
                    u = new byte[planes[1].getBuffer().limit() - planes[1].getBuffer().position()];
                    v = new byte[planes[2].getBuffer().limit() - planes[2].getBuffer().position()];
                }
                if (image.getPlanes()[0].getBuffer().remaining() == y.length) {
                    planes[0].getBuffer().get(y);
                    planes[1].getBuffer().get(u);
                    planes[2].getBuffer().get(v);
                    camera2Listener.onPreview(y, u, v, mPreviewSize, planes[0].getRowStride());
                }
                lock.unlock();
            }
            image.close();
        }
    }
}
```
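The orientation math in `getCameraOri` is easy to get wrong, so here is a standalone sketch of the same formula, with no Android dependencies. The class name `OrientationDemo` is mine; the 90°/270° sensor orientations are the values typically reported by back and front phone cameras.

```java
public class OrientationDemo {
    // Mirrors getCameraOri: combine the display rotation (in degrees) with the
    // sensor orientation; the front camera is additionally mirror-compensated.
    public static int cameraOrientation(int displayDegrees, int sensorOrientation, boolean front) {
        if (front) {
            int result = (sensorOrientation + displayDegrees) % 360;
            return (360 - result) % 360; // compensate for the front camera's mirroring
        }
        return (sensorOrientation - displayDegrees + 360) % 360;
    }

    public static void main(String[] args) {
        // Typical phone: back sensor mounted at 90 degrees, front at 270
        System.out.println(cameraOrientation(0, 90, false));  // 90
        System.out.println(cameraOrientation(0, 270, true));  // 90
        System.out.println(cameraOrientation(90, 90, false)); // 0
    }
}
```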

Then, after initialization, hook into the layout:

```kotlin
texture_preview.viewTreeObserver.addOnGlobalLayoutListener(this)

override fun onGlobalLayout() {
    texture_preview.viewTreeObserver.removeOnGlobalLayoutListener(this)
    if (!checkPermissions(NEEDED_PERMISSIONS)) {
        ActivityCompat.requestPermissions(this, NEEDED_PERMISSIONS, ACTION_REQUEST_PERMISSIONS)
    } else {
        initCamera()
    }
}
```

Initialize the camera:

```kotlin
camera2Helper = Camera2Helper.Builder()
    .cameraListener(this)
    .maxPreviewSize(Point(1920, 1080))
    .minPreviewSize(Point(1280, 720))
    .specificCameraId(CAMERA_ID)
    .context(applicationContext)
    .previewOn(texture_preview)
    .previewViewSize(Point(texture_preview.width, texture_preview.height))
    .rotation(windowManager.defaultDisplay.rotation)
    .build()
camera2Helper.start()
```

Now let's look at what the camera callbacks do, starting with the camera being opened:

```kotlin
override fun onCameraOpened(
    cameraDevice: CameraDevice?,
    cameraId: String?,
    previewSize: Size?,
    displayOrientation: Int,
    isMirror: Boolean
) {
    Log.i("Wzz", "onCameraOpened: previewSize = ${previewSize?.width} x ${previewSize?.height}")
    mDisplayOrientation = displayOrientation
    isMirrorPreview = isMirror
    openedCameraId = cameraId
}
```

The important part is the raw YUV data delivered by the preview callback:

```kotlin
if (!this::nv21.isInitialized) {
    nv21 = ByteArray(stride * previewSize!!.height * 3 / 2)
}
// The data is YUV 4:2:2 when the Y plane is twice the size of the U plane
if (y!!.size / u!!.size == 2) {
    ImageUtil.yuv422ToYuv420sp(y, u, v, nv21, stride, previewSize!!.height)
} else if (y.size / u.size == 4) {
    ImageUtil.yuv420ToYuv420sp(y, u, v, nv21, stride, previewSize!!.height)
}
val yuvImage = YuvImage(nv21, ImageFormat.NV21, stride, previewSize!!.height, null)
```
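The buffer sizing and the plane-ratio check above can be verified in isolation. This plain-Java sketch (the class name `BufferDemo` is mine, not the article's) shows both calculations:

```java
public class BufferDemo {
    // NV21 needs stride * height Y bytes plus half as many interleaved VU bytes.
    public static int nv21Size(int stride, int height) {
        return stride * height * 3 / 2;
    }

    // The plane-size ratio used above to tell 4:2:2 data (ratio 2) apart
    // from 4:1:1 data (ratio 4).
    public static int yToURatio(int yLen, int uLen) {
        return yLen / uLen;
    }

    public static void main(String[] args) {
        System.out.println(nv21Size(1280, 720));                    // 1382400
        System.out.println(yToURatio(1280 * 720, 1280 * 720 / 4));  // 4
    }
}
```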

Wrap the NV21 data in a YuvImage (Java version):

```java
YuvImage yuvimage = new YuvImage(_data, ImageFormat.NV21,
        _previewSize.getWidth(), _previewSize.getHeight(), null);
```

Then decode it into an RGB_565 bitmap. The YuvImage has to be compressed into the stream first, otherwise there is nothing for BitmapFactory to decode:

```java
ByteArrayOutputStream baos = new ByteArrayOutputStream();
// Fill the stream with a JPEG of the frame before decoding it back out
yuvimage.compressToJpeg(new Rect(0, 0, _previewSize.getWidth(), _previewSize.getHeight()), 100, baos);
BitmapFactory.Options bfo = new BitmapFactory.Options();
bfo.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap _currentFrame = BitmapFactory.decodeStream(new ByteArrayInputStream(baos.toByteArray()), null, bfo);
```

If the frame's orientation needs correcting:

```java
Matrix matrix = new Matrix();
if (mIsVertical) {
    matrix.postRotate(90);
    // Android's built-in face detection needs the head upright in the image,
    // so mirror and rotate the frame first
    matrix.preScale(-1, 1);
    // We rotate the same Bitmap
    _currentFrame = Bitmap.createBitmap(_currentFrame, 0, 0,
            _previewSize.getWidth(), _previewSize.getHeight(), matrix, false);
}
```
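As a sanity check on what the Matrix is doing here, this standalone sketch (my own helper, not Android's Matrix API) traces where a single pixel ends up after the mirror-then-rotate transform, under the assumption that `preScale(-1, 1)` mirrors horizontally before the 90° clockwise rotation:

```java
public class RotateDemo {
    // Mirror horizontally, then rotate 90 degrees clockwise.
    // Pixel (x, y) of a w x h frame lands at (h - 1 - y, w - 1 - x)
    // in the resulting h x w image.
    public static int[] mapPixel(int x, int y, int w, int h) {
        return new int[]{h - 1 - y, w - 1 - x};
    }

    public static void main(String[] args) {
        // The top-left pixel of a 4x2 frame moves to the far corner
        int[] p = mapPixel(0, 0, 4, 2);
        System.out.println(p[0] + "," + p[1]); // 1,3
    }
}
```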

Now FaceDetector can run on the frame. Note that findFaces only accepts an RGB_565 bitmap, and the bitmap width must be even:

```java
FaceDetector d = new FaceDetector(
        _currentFrame.getWidth(),
        _currentFrame.getHeight(),
        1); // maximum number of faces to detect
Face[] faces = new Face[1];
d.findFaces(_currentFrame, faces);
```

From there you can inspect the returned Face objects and handle them however your feature requires.

Next, drawing a box at the detected face position:

```kotlin
private fun handleFaces(face: FaceDetector.Face) {
    val pointF = PointF()
    face.getMidPoint(pointF)
    mFacesRect.clear()
    // width/height are the dimensions of the frame the detector ran on; they are
    // swapped relative to the view because the frame was rotated 90 degrees
    val widthP = texture_preview.width / height
    val heightP = texture_preview.height / width
    val spec = face.eyesDistance() / heightP
    val x = pointF.x * widthP
    val y = pointF.y * heightP
    val left = x - spec
    val top = y - spec
    val right = x + spec
    val bottom = y + spec
    val rawFaceRect = RectF(left, top, right, bottom)
    // A rect covering the whole preview, kept around for debugging the view bounds
    val debugRect = RectF(0f, 0f,
        texture_preview.width.toFloat(),
        texture_preview.height.toFloat())
    mFaceDetectMatrix.mapRect(rawFaceRect)
    Log.d("wzz", "preview: $width * $height")
    Log.d("wzz", "texture_preview: ${texture_preview.width} * ${texture_preview.height}")
    Log.d("wzz", "texture_preview: ${texture_preview.top} * ${texture_preview.left} --- ${texture_preview.right} --- ${texture_preview.bottom}")
    val resultFaceRect = rawFaceRect
    mFacesRect.add(resultFaceRect)
    mFacesRect.add(debugRect)
    Log.d("wzz", "raw face position: ${pointF.x} * ${pointF.y} ---- ${face.eyesDistance()}")
    Log.d("wzz", "mapped face position: ${resultFaceRect.width()} * ${resultFaceRect.height()} ${resultFaceRect.left} ${resultFaceRect.top} ${resultFaceRect.right} ${resultFaceRect.bottom}")
    runOnUiThread {
        faceView.setFaces(mFacesRect)
    }
}
```
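The scaling above boils down to mapping the detector's eye midpoint and eye distance into view coordinates. Here is a simplified, Android-free sketch of that mapping; `FaceRectDemo`, its parameter layout, and the choice of a square box sized by the eye distance are mine, not taken verbatim from the article:

```java
public class FaceRectDemo {
    // FaceDetector reports the eye midpoint and eye distance in
    // detection-bitmap pixels; scale both into view coordinates and
    // build a square box centered on the midpoint.
    public static float[] faceBox(float midX, float midY, float eyesDistance,
                                  int bmpW, int bmpH, int viewW, int viewH) {
        float sx = (float) viewW / bmpW;
        float sy = (float) viewH / bmpH;
        float x = midX * sx;
        float y = midY * sy;
        float half = eyesDistance * sy; // use the scaled eye distance as the half-size
        return new float[]{x - half, y - half, x + half, y + half}; // left, top, right, bottom
    }

    public static void main(String[] args) {
        // Midpoint (100, 80), eye distance 20 on a 200x160 bitmap shown in a 400x320 view
        float[] r = faceBox(100, 80, 20, 200, 160, 400, 320);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // 160.0,120.0,240.0,200.0
    }
}
```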

The exact parameters are yours to tweak and experiment with. Planned follow-ups:

1. An OpenCV-based face recognition approach, starting with converting a 2D face model to 3D
2. ARCore's augmented-faces features

Questions are welcome in the comments, or contact me directly.

