
No More Pitfalls! A Step-by-Step Guide to Deploying OpenCV on Android

By 站长

A course project required an image-processing library, and since mobile development is the area I know best, I figured I'd just write an app and submit that. The catch: an image-processing library that runs on a phone has to meet two requirements. It must be fast (every frame the camera captures has to be processed quickly, or the app will stutter in real use) and lightweight (you can't stuff a multi-gigabyte model onto a phone). After some thought, the realistic options came down to OpenCV and TensorFlow Lite (the latter supports swappable models, which is arguably nicer). This post documents the process of getting the OpenCV library working properly.

Preparation

OpenCV SDK


  • You can see that the latest version is 4.9.0, released 2023-12-28; the package we need is the Android one, so just download it.

Android Studio

  • This tutorial was written in April 2024, with Android Studio version Android Studio Iguana | 2023.2.1.

  • Android Studio official website

  • Create a project as usual.

  • OpenCV's camera preview view is still implemented with Camera/Camera2; it hasn't adopted Google's newer CameraX yet (hopefully that changes soon, since CameraX is far nicer to use).

  • Note: when choosing the Build configuration language, select Groovy. The build.gradle inside the OpenCV module we are about to integrate still uses Groovy, and using the newer Kotlin DSL will cause build errors.
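For context, the two DSLs express the same configuration quite differently; a schematic comparison (not taken verbatim from the OpenCV sources):

```groovy
// Groovy DSL (build.gradle): the style OpenCV's module script uses
apply plugin: 'com.android.library'

// The Kotlin DSL (build.gradle.kts) would instead write:
//   plugins { id("com.android.library") }
// Generating your project with the Kotlin DSL while the opencv module
// stays on Groovy is what leads to the build errors mentioned above.
```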

Getting Started

Importing the SDK

  • Open Project Structure.


  • Choose Import Module.


  • Locate the OpenCV package you downloaded and import it.


  • Rename the imported module. You can leave it as-is, but I think renaming it is cleaner.


  • Click Finish, and the import completes.


  • We still need to add the module as a Dependency of our own project: choose Module Dependency.


  • Select opencv and click OK.


  • You can see that opencv has been imported successfully.
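If Gradle sync later complains that the module cannot be found, it is worth confirming that the import also registered it in settings.gradle. With the module renamed to opencv as above, the relevant lines look roughly like this (a sketch; your file will contain more):

```groovy
// settings.gradle (project root): every included module must be listed
include ':app'
include ':opencv'
```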


Modifying the Files

  • The imported opencv module can't be used as-is: many settings in its build.gradle differ from those in our own project's build.gradle, so we need to bring the two in sync.

  • The build.gradle under opencv already contains comments telling you how to set each option. I'm not one for sticking with old versions, so to keep everything in sync with my project's build.gradle I raised every version number I could.

  • First, a look at the default generated app/build.gradle:

plugins {
    alias(libs.plugins.androidApplication)
    alias(libs.plugins.jetbrainsKotlinAndroid)
}

android {
    namespace 'com.ericmoin.opencv_demo'
    compileSdk 34

    defaultConfig {
        applicationId "com.ericmoin.opencv_demo"
        minSdk 24
        targetSdk 34
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_17
        targetCompatibility JavaVersion.VERSION_17
    }
    kotlinOptions {
        jvmTarget = '17'
    }
    buildFeatures {
        viewBinding true
    }
}

dependencies {
    implementation project(':opencv')
    implementation libs.androidx.core.ktx
    implementation libs.androidx.appcompat
    implementation libs.material
    implementation libs.androidx.activity
    implementation libs.androidx.constraintlayout
    testImplementation libs.junit
    androidTestImplementation libs.androidx.junit
    androidTestImplementation libs.androidx.espresso.core
}
  • From this we get minSdk 24 and targetSdk 34; these are the values we will need to change in opencv/build.gradle shortly.

  • We also need to know the kotlin and gradle versions. Each Android Studio release adjusts the gradle setup in the project structure, and in the latest versions the version numbers are no longer written inline. You can Ctrl+click on libs.plugins.jetbrainsKotlinAndroid to jump to the file it references and find it there.
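The file it jumps to is the version catalog, gradle/libs.versions.toml. The entries behind the two plugin aliases look roughly like this (the version numbers below are placeholders; use whatever your file actually says):

```toml
# gradle/libs.versions.toml (excerpt; version numbers are illustrative)
[versions]
agp = "8.3.1"
kotlin = "1.9.22"

[plugins]
androidApplication = { id = "com.android.application", version.ref = "agp" }
jetbrainsKotlinAndroid = { id = "org.jetbrains.kotlin.android", version.ref = "kotlin" }
```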


  • There you can find the relevant version numbers. Next, modify opencv/build.gradle:
apply plugin: 'com.android.library'
apply plugin: 'maven-publish'
apply plugin: 'kotlin-android'

def openCVersionName = "4.9.0"
def openCVersionCode = ((4 * 100 + 9) * 100 + 0) * 10 + 0

println "OpenCV: " +openCVersionName + " " + project.buildscript.sourceFile

android {
    namespace 'org.opencv'
//    compileSdkVersion 31
    compileSdkVersion 34
    defaultConfig {
        minSdkVersion 24
//        minSdkVersion 21
//        targetSdkVersion 31
        targetSdkVersion 34

        versionCode openCVersionCode
        versionName openCVersionName

        externalNativeBuild {
            cmake {
                arguments "-DANDROID_STL=c++_shared"
                targets "opencv_jni_shared"
            }
        }
    }

    compileOptions {
//        sourceCompatibility JavaVersion.VERSION_1_8
        sourceCompatibility JavaVersion.VERSION_17
        targetCompatibility JavaVersion.VERSION_17
//        targetCompatibility JavaVersion.VERSION_1_8
    }



    buildTypes {
        debug {
            packagingOptions {
                doNotStrip '**/*.so'  // controlled by OpenCV CMake scripts
            }
        }
        release {
            packagingOptions {
                doNotStrip '**/*.so'  // controlled by OpenCV CMake scripts
            }
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
        }
    }

    buildFeatures {
        aidl true
        prefabPublishing true
        buildConfig true
    }
    prefab {
        opencv_jni_shared {
            headers "native/jni/include"
        }
    }

    sourceSets {
        main {
            jniLibs.srcDirs = ['native/libs']
            java.srcDirs = ['java/src']
            aidl.srcDirs = ['java/src']
            res.srcDirs = ['java/res']
            manifest.srcFile 'java/AndroidManifest.xml'
        }
    }

    publishing {
        singleVariant('release') {
            withSourcesJar()
            withJavadocJar()
        }
    }

    externalNativeBuild {
        cmake {
            path (project.projectDir.toString() + '/libcxx_helper/CMakeLists.txt')
        }
    }
}

publishing {
    publications {
        release(MavenPublication) {
            groupId = 'org.opencv'
            artifactId = 'opencv'
            version = '4.9.0'

            afterEvaluate {
               from components.release
           }
        }
    }
    repositories {
        maven {
            name = 'myrepo'
            url = "${project.buildDir}/repo"
        }
    }
}

dependencies {

}
  • The commented-out lines are the file's original contents; note the places that were changed. The Java version is up to you as long as the build succeeds; I'm using JDK 17 here.

  • If everything above went smoothly, try running the app now; if done correctly, it should start normally.

  • If you hit errors, check whether your gradle versions match and whether the JDK used for compilation is consistent.

  • OpenCV 4.9's build.gradle no longer contains the gradle version number section. If you're on an earlier OpenCV release, pay extra attention to mismatched version numbers and to the differences between the two build.gradle files and settings.gradle.
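As an aside, the openCVersionCode expression in opencv/build.gradle is just positional encoding of major/minor/revision plus one build digit. A minimal Java sketch (the class and method names here are mine, not OpenCV's):

```java
// Mirrors the Groovy expression ((4 * 100 + 9) * 100 + 0) * 10 + 0
// from opencv/build.gradle: two digits each for minor and revision,
// one trailing digit for the build number.
public class OpenCvVersionCode {
    public static int encode(int major, int minor, int revision, int build) {
        return ((major * 100 + minor) * 100 + revision) * 10 + build;
    }
}
```

So 4.9.0 becomes 409000, and a hypothetical 4.10.1 would become 410010; the scheme keeps version codes monotonically increasing across releases.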

Writing the Basic Layout

  • Since this is just a demo, I'll only show two features here: OpenCV edge detection and image grayscaling.

  • That keeps the layout very simple: a DetectFragment and an ImageFragment, plus the MainFragment shown at startup, which also handles the permission requests and route navigation.

  • First, add the required libraries in app/build.gradle:

dependencies {
    implementation project(':opencv')
//    implementation "com.guolindev.permissionx:permissionx:1.7.1"
//    def nav_version = "2.7.7"
//    implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
//    implementation "androidx.navigation:navigation-ui-ktx:$nav_version"
//    implementation "androidx.navigation:navigation-dynamic-features-fragment:$nav_version"
//    androidTestImplementation "androidx.navigation:navigation-testing:$nav_version"
//    implementation "androidx.navigation:navigation-compose:$nav_version"

    implementation libs.permissionx
    implementation libs.androidx.navigation.fragment.ktx
    implementation libs.androidx.navigation.ui.ktx
    implementation libs.androidx.navigation.dynamic.features.fragment
    androidTestImplementation libs.androidx.navigation.testing
    implementation libs.androidx.navigation.compose

    implementation libs.androidx.core.ktx
    implementation libs.androidx.appcompat
    implementation libs.material
    implementation libs.androidx.activity
    implementation libs.androidx.constraintlayout
    testImplementation libs.junit
    androidTestImplementation libs.androidx.junit
    androidTestImplementation libs.androidx.espresso.core
}
  • The commented-out lines above are the old notation; Android Studio will suggest converting them to the new form. I include them only for readers on older Android Studio versions.
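For the libs.* aliases above to resolve, the version catalog needs matching [libraries] entries. A sketch of what they might look like (the coordinates match the commented-out strings above; version numbers are illustrative):

```toml
# gradle/libs.versions.toml (excerpt; version numbers are illustrative)
[versions]
permissionx = "1.7.1"
navigation = "2.7.7"

[libraries]
permissionx = { group = "com.guolindev.permissionx", name = "permissionx", version.ref = "permissionx" }
androidx-navigation-fragment-ktx = { group = "androidx.navigation", name = "navigation-fragment-ktx", version.ref = "navigation" }
androidx-navigation-ui-ktx = { group = "androidx.navigation", name = "navigation-ui-ktx", version.ref = "navigation" }
```

Gradle converts the dashes in alias names to dots, which is how androidx-navigation-fragment-ktx becomes libs.androidx.navigation.fragment.ktx.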

  • Create MainFragment.kt, DetectFragment.kt, and ImageFragment.kt in turn.

  • Write activity_main.xml:

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">
    <androidx.fragment.app.FragmentContainerView
        android:id="@+id/container"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:name="androidx.navigation.fragment.NavHostFragment"

        app:defaultNavHost="true"
        app:navGraph="@navigation/main_navigation"
        />
</androidx.constraintlayout.widget.ConstraintLayout>
  • Write fragment_main.xml:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/mainFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <Button
        android:id="@+id/detectButton"
        android:text="Go to Detect"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        />
    <Button
        android:id="@+id/imageButton"
        android:text="Go to Image"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        />
</LinearLayout>
  • Write fragment_detect.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <org.opencv.android.JavaCamera2View
        android:id="@+id/cameraView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        />
</androidx.constraintlayout.widget.ConstraintLayout>
  • Write fragment_image.xml:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <ImageView
        android:id="@+id/image"
        android:background="@drawable/example"
        android:layout_width="250dp"
        android:layout_height="250dp"
        android:layout_marginTop="100dp"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        />
    <Button
        android:id="@+id/button"
        android:text="Change"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintTop_toBottomOf="@id/image"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
    />

</androidx.constraintlayout.widget.ConstraintLayout>
  • The android:background="@drawable/example" here is a sample image I've placed in drawable; replace it with your own image.

Navigation Graph

  • Under src/res, create a navigation folder and add a new navigation resource file (the name doesn't matter; I called it main_navigation.xml).


  • Open this file and click add destination.


  • We want MainFragment to navigate to both DetectFragment and ImageFragment, so create the two destinations and draw an action from MainFragment to each. (If this editor is unfamiliar, a quick search will cover it; it really is just creating the destinations and connecting them.)


  • The generated MainActivity code differs between Android Studio versions; mine looks like this:
package com.ericmoin.opencv_demo

import android.os.Bundle
import android.widget.Toast
import androidx.activity.enableEdgeToEdge
import androidx.appcompat.app.AppCompatActivity
import androidx.core.view.ViewCompat
import androidx.core.view.WindowInsetsCompat
import com.ericmoin.opencv_demo.databinding.ActivityMainBinding
import org.opencv.android.OpenCVLoader
import org.opencv.imgproc.Imgproc

class MainActivity : AppCompatActivity() {
    lateinit var binding: ActivityMainBinding
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        if(OpenCVLoader.initLocal()){
            Toast.makeText(this,"OpenCV initialized successfully",Toast.LENGTH_SHORT).show()
        }
    }
}
  • Now click Run.


  • Everything works.
  • Next, write MainFragment:
package com.ericmoin.opencv_demo

import androidx.fragment.app.viewModels
import android.os.Bundle
import androidx.fragment.app.Fragment
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import androidx.navigation.fragment.findNavController
import com.ericmoin.opencv_demo.databinding.FragmentMainBinding

class MainFragment : Fragment() {

    companion object {
        fun newInstance() = MainFragment()
    }
    lateinit var binding: FragmentMainBinding
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
    }

    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        binding = FragmentMainBinding.inflate(inflater,container,false)
        return binding.root
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        binding.detectButton.setOnClickListener {
            findNavController().navigate(R.id.action_mainFragment_to_detectFragment)
        }
        binding.imageButton.setOnClickListener {
            findNavController().navigate(R.id.action_mainFragment_to_imageFragment)
        }
    }
}

Image Grayscaling

  • OpenCV image processing basically follows three steps:
    • Convert the Bitmap to a Mat
    • Operate on the Mat
    • Convert the Mat back to a Bitmap and display it
  • Grayscaling comes down to a single core call, Imgproc.cvtColor, so there isn't much to explain:
package com.ericmoin.opencv_demo

import android.graphics.Bitmap
import androidx.fragment.app.viewModels
import android.os.Bundle
import androidx.fragment.app.Fragment
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.Button
import android.widget.ImageView
import androidx.core.view.drawToBitmap
import com.ericmoin.opencv_demo.databinding.FragmentImageBinding
import org.opencv.android.Utils
import org.opencv.core.CvType
import org.opencv.core.Mat
import org.opencv.imgproc.Imgproc

class ImageFragment : Fragment() {

    companion object {
        fun newInstance() = ImageFragment()
    }
    lateinit var binding: FragmentImageBinding
    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        binding = FragmentImageBinding.inflate(inflater,container,false)
        return binding.root
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        initButton()
    }
    private fun initButton() {
        binding.button.setOnClickListener {
            changeImage()
        }
    }

    private fun changeImage() {
        // Grab the Bitmap currently displayed by the ImageView
        val bitmap = binding.image.drawToBitmap().copy(Bitmap.Config.ARGB_8888,false)
        // Create an empty matrix
        val src = Mat()
        // Convert the Bitmap into the matrix
        Utils.bitmapToMat(bitmap,src)
        // Grayscale the image (bitmapToMat yields RGBA, so use COLOR_RGBA2GRAY)
        Imgproc.cvtColor(src,src, Imgproc.COLOR_RGBA2GRAY)
        // Convert the transformed matrix back into a bitmap
        Utils.matToBitmap(src,bitmap)
        // Display the result
        binding.image.setImageBitmap(bitmap)
    }

}
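For the curious, the grayscale conversion behind cvtColor uses the standard BT.601 luma weights. A tiny plain-Java sketch of the per-pixel math (the Luma class here is illustrative, not part of OpenCV's API):

```java
// Per-pixel grayscale using the BT.601 weights that OpenCV's
// RGB-to-gray conversions apply: Y = 0.299 R + 0.587 G + 0.114 B
public class Luma {
    public static int gray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }
}
```

Green dominates the weighting because the eye is most sensitive to it, which is why a pure-green image looks brighter in grayscale than a pure-blue one.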
  • Before


  • After


Edge Detection

  • I want a real-time edge-detection effect, which means we need the camera permission.

  • Declare it in AndroidManifest.xml:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools">
    <uses-feature android:name="android.hardware.camera.any" />
    <uses-feature android:name="android.hardware.autofocus" />

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <application
        android:allowBackup="true"
        android:dataExtractionRules="@xml/data_extraction_rules"
        android:fullBackupContent="@xml/backup_rules"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/Theme.Opencv_demo"
        tools:targetApi="31">
        <activity
            android:name=".MainActivity"
            android:exported="true">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
  • For the permission requests I used Guo Lin's PermissionX; the library was already added in the gradle section above, so it can be used directly.

  • Back to MainActivity:

package com.ericmoin.opencv_demo

import android.Manifest
import android.os.Bundle
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import com.ericmoin.opencv_demo.databinding.ActivityMainBinding
import com.permissionx.guolindev.PermissionX
import org.opencv.android.OpenCVLoader

class MainActivity : AppCompatActivity() {
    companion object{
        private const val REQUEST_CODE_PERMISSIONS = 10
        private val REQUIRED_PERMISSIONS = arrayOf(
            Manifest.permission.CAMERA,
            Manifest.permission.READ_EXTERNAL_STORAGE,
            Manifest.permission.WRITE_EXTERNAL_STORAGE
        )
    }
    lateinit var binding: ActivityMainBinding
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        if(OpenCVLoader.initLocal()){
            Toast.makeText(this,"OpenCV initialized successfully",Toast.LENGTH_SHORT).show()
        }
        initPermission()
    }
    private fun initPermission() {
        PermissionX.init(this)
            .permissions(
                REQUIRED_PERMISSIONS.toList()
            )
            .request { allGranted, _, _ ->
                if ( allGranted ){
                    Toast.makeText(this,"Permissions granted", Toast.LENGTH_SHORT).show()
                }
                else{
                    ActivityCompat.requestPermissions(
                        this,
                        REQUIRED_PERMISSIONS,
                        REQUEST_CODE_PERMISSIONS
                    )
                }
            }
    }
}
  • Write DetectFragment:
package com.ericmoin.opencv_demo

import androidx.fragment.app.viewModels
import android.os.Bundle
import android.util.Log
import androidx.fragment.app.Fragment
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import com.ericmoin.opencv_demo.databinding.FragmentDetectBinding
import org.opencv.android.CameraBridgeViewBase
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2
import org.opencv.core.Mat
import org.opencv.core.MatOfPoint
import org.opencv.core.Point
import org.opencv.core.Scalar
import org.opencv.core.Size
import org.opencv.imgproc.Imgproc

class DetectFragment : Fragment() {

    companion object {
        fun newInstance() = DetectFragment()
    }
    lateinit var binding: FragmentDetectBinding
    override fun onCreateView(
        inflater: LayoutInflater, container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View {
        binding = FragmentDetectBinding.inflate(inflater,container,false)
        return binding.root
    }
    val cameraViewListener2 = object : CvCameraViewListener2{
        override fun onCameraViewStarted(width: Int, height: Int) {

        }

        override fun onCameraViewStopped() {

        }

        override fun onCameraFrame(inputFrame: CameraBridgeViewBase.CvCameraViewFrame): Mat? {
            return drawBorder(inputFrame.rgba())
        }
    }
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        binding.cameraView.visibility = CameraBridgeViewBase.VISIBLE
        binding.cameraView.setCvCameraViewListener(cameraViewListener2)
        binding.cameraView.setCameraPermissionGranted()
    }

    override fun onResume() {
        super.onResume()
        binding.cameraView.enableView()
    }

    override fun onDestroy() {
        super.onDestroy()
        binding.cameraView.disableView()
    }
    private fun drawBorder(mat:Mat): Mat? {
        val result = Mat()
        // Grayscale the source matrix (camera frames arrive as RGBA) for edge detection
        Imgproc.cvtColor(mat,result, Imgproc.COLOR_RGBA2GRAY)
        // Gaussian blur to suppress noise
        Imgproc.GaussianBlur(result, result, Size(3.0, 3.0), 0.0)
        // Canny edge detection
        Imgproc.Canny(result, result, 0.0, 256.0)
        val contours: List<MatOfPoint> = ArrayList()
        val hierarchy = Mat()
        // Find the contours
        Imgproc.findContours(
            result,
            contours,
            hierarchy,
            Imgproc.RETR_EXTERNAL,
            Imgproc.CHAIN_APPROX_SIMPLE
        )
        if (contours.isEmpty()) {
            return null
        }
        val resultMat = mat.clone()
        // Draw the contours
        for( index in contours.indices ){
            // Pick a random color for each contour
            val scalar = Scalar((0..255).random().toDouble(),(0..255).random().toDouble(),(0..255).random().toDouble())
            Imgproc.drawContours(resultMat,contours,index,scalar,1,8,hierarchy,0, Point())
        }
        return resultMat
    }
}
  • Notice that OpenCV's API on Android stays essentially consistent with its Python API, so in theory the common Python OpenCV tutorials online can all be reproduced on Android.

  • Run it and see the result. On many devices the camera preview comes out rotated by 90 degrees; a widely shared fix is to patch CameraBridgeViewBase.java inside the opencv module with the code below, which adds a transform matrix and replaces the existing deliverAndDrawFrame:

private final Matrix mMatrix = new Matrix();

private void updateMatrix() {
    float mw = this.getWidth();
    float mh = this.getHeight();

    float hw = this.getWidth() / 2.0f;
    float hh = this.getHeight() / 2.0f;

    float cw  = (float)Resources.getSystem().getDisplayMetrics().widthPixels; //Make sure to import Resources package
    float ch  = (float)Resources.getSystem().getDisplayMetrics().heightPixels;

    float scale = cw / (float)mh;
    float scale2 = ch / (float)mw;
    if(scale2 > scale){
        scale = scale2;
    }

    boolean isFrontCamera = mCameraIndex == CAMERA_ID_FRONT;

    mMatrix.reset();
    if (isFrontCamera) {
        mMatrix.preScale(-1, 1, hw, hh); //MH - this will mirror the camera
    }
    mMatrix.preTranslate(hw, hh);
    if (isFrontCamera){
        mMatrix.preRotate(270);
    } else {
        mMatrix.preRotate(90);
    }
    mMatrix.preTranslate(-hw, -hh);
    mMatrix.preScale(scale,scale,hw,hh);
}

@Override
public void layout(int l, int t, int r, int b) {
    super.layout(l, t, r, b);
    updateMatrix();
}

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    updateMatrix();
}

/**
* This method shall be called by the subclasses when they have valid
* object and want it to be delivered to external client (via callback) and
* then displayed on the screen.
* @param frame - the current frame to be delivered
*/
protected void deliverAndDrawFrame(CvCameraViewFrame frame) { //replaces existing deliverAndDrawFrame
    Mat modified;

    if (mListener != null) {
        modified = mListener.onCameraFrame(frame);
    } else {
        modified = frame.rgba();
    }

    boolean bmpValid = true;
    if (modified != null) {
        try {
            Utils.matToBitmap(modified, mCacheBitmap);
        } catch(Exception e) {
            Log.e(TAG, "Mat type: " + modified);
            Log.e(TAG, "Bitmap type: " + mCacheBitmap.getWidth() + "*" + mCacheBitmap.getHeight());
            Log.e(TAG, "Utils.matToBitmap() throws an exception: " + e.getMessage());
            bmpValid = false;
        }
    }

    if (bmpValid && mCacheBitmap != null) {
        Canvas canvas = getHolder().lockCanvas();
        if (canvas != null) {
            canvas.drawColor(0, android.graphics.PorterDuff.Mode.CLEAR);
            int saveCount = canvas.save();
            canvas.setMatrix(mMatrix);

            if (mScale != 0) {
                canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
                        new Rect((int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2),
                                (int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2),
                                (int)((canvas.getWidth() - mScale*mCacheBitmap.getWidth()) / 2 + mScale*mCacheBitmap.getWidth()),
                                (int)((canvas.getHeight() - mScale*mCacheBitmap.getHeight()) / 2 + mScale*mCacheBitmap.getHeight())), null);
            } else {
                canvas.drawBitmap(mCacheBitmap, new Rect(0,0,mCacheBitmap.getWidth(), mCacheBitmap.getHeight()),
                        new Rect((canvas.getWidth() - mCacheBitmap.getWidth()) / 2,
                                (canvas.getHeight() - mCacheBitmap.getHeight()) / 2,
                                (canvas.getWidth() - mCacheBitmap.getWidth()) / 2 + mCacheBitmap.getWidth(),
                                (canvas.getHeight() - mCacheBitmap.getHeight()) / 2 + mCacheBitmap.getHeight()), null);
            }

            //Restore canvas after draw bitmap
            canvas.restoreToCount(saveCount);

            if (mFpsMeter != null) {
                mFpsMeter.measure();
                mFpsMeter.draw(canvas, 20, 30);
            }
            getHolder().unlockCanvasAndPost(canvas);
        }
    }
}

Summary

  • I went through a lot of tutorials online and stepped into plenty of pits before getting this whole setup working. If you've read this far, please consider leaving a like or a bookmark. Thanks for reading.