1. Technical Architecture and Core Components
An Android camera object detection system needs to integrate three core modules: camera data acquisition, real-time image processing, and machine learning model inference. A layered architecture is recommended, splitting the functionality into a hardware abstraction layer (HAL), an image processing layer, and a business logic layer.
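As a rough illustration of this layering, each layer can be expressed as a small interface. The names below (CameraSource, FramePreprocessor, DetectionEngine, DetectionResult) are hypothetical placeholders for this article, not Android or TFLite APIs:

```kotlin
// Hypothetical layer boundaries for the detection pipeline (illustrative sketch only)
interface CameraSource {                               // hardware abstraction layer
    fun start(onFrame: (frame: ByteArray, width: Int, height: Int) -> Unit)  // delivers raw NV21 frames
    fun stop()
}

interface FramePreprocessor {                          // image processing layer
    fun toModelInput(nv21: ByteArray, width: Int, height: Int): java.nio.ByteBuffer
}

interface DetectionEngine {                            // business logic / inference layer
    fun detect(input: java.nio.ByteBuffer): List<DetectionResult>
}
```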
1.1 Camera Data Acquisition
The Android Camera2 API provides fine-grained control over the camera. The key configuration steps are:
```kotlin
// Create the CaptureRequest.Builder for preview
val captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
// Add the preview Surface as an output target
captureBuilder.addTarget(previewSurface)
// Enable continuous autofocus
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
// Create the capture session and issue a repeating preview request
cameraDevice.createCaptureSession(Arrays.asList(previewSurface),
    object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(session: CameraCaptureSession) {
            session.setRepeatingRequest(captureBuilder.build(), null, null)
        }
        override fun onConfigureFailed(session: CameraCaptureSession) {
            Log.e("Camera", "Capture session configuration failed")
        }
    }, null)
```
TextureView is recommended as the preview view; its asynchronous rendering path helps keep frame delivery off the UI thread. For high-frame-rate requirements (e.g. 60 fps), verify the CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES and SCALER_STREAM_CONFIGURATION_MAP entries in CameraCharacteristics, as in the sketch below.
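A minimal sketch of that capability check, assuming `cameraManager` and `cameraId` are already available (true high-speed capture additionally requires a constrained high-speed session, which is not shown here):

```kotlin
val characteristics = cameraManager.getCameraCharacteristics(cameraId)

// Target FPS ranges supported by auto-exposure, e.g. [30, 30] or [15, 60]
val fpsRanges = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES)
val supports60fps = fpsRanges?.any { it.upper >= 60 } == true

// Output sizes the camera can stream to a SurfaceTexture-backed preview
val streamMap = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
val previewSizes = streamMap?.getOutputSizes(SurfaceTexture::class.java)

Log.d("CameraInfo", "60fps supported: $supports60fps, preview sizes: ${previewSizes?.joinToString()}")
```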
1.2 Image Preprocessing Pipeline
Raw frame data must go through a standardized preprocessing pipeline:
- Color space conversion (NV21/YUV420 → RGB)
- Resizing to the model's input resolution (typically 224x224 or 299x299)
- Mean/standard-deviation normalization (ImageNet statistics: μ = 0.485, 0.456, 0.406; σ = 0.229, 0.224, 0.225), as in the sketch below
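The normalization step can be folded into the code that fills the model's input buffer. A minimal sketch for a float32 model (square RGB Bitmap in, NHWC float buffer out; this step is not needed for fully quantized uint8 models):

```kotlin
// ImageNet per-channel mean and standard deviation (R, G, B)
private val MEAN = floatArrayOf(0.485f, 0.456f, 0.406f)
private val STD = floatArrayOf(0.229f, 0.224f, 0.225f)

fun bitmapToNormalizedBuffer(bitmap: Bitmap, inputSize: Int): ByteBuffer {
    val buffer = ByteBuffer.allocateDirect(inputSize * inputSize * 3 * 4)  // float32 RGB
        .order(ByteOrder.nativeOrder())
    val pixels = IntArray(inputSize * inputSize)
    bitmap.getPixels(pixels, 0, inputSize, 0, 0, inputSize, inputSize)
    for (pixel in pixels) {
        val r = ((pixel shr 16) and 0xFF) / 255f
        val g = ((pixel shr 8) and 0xFF) / 255f
        val b = (pixel and 0xFF) / 255f
        buffer.putFloat((r - MEAN[0]) / STD[0])
        buffer.putFloat((g - MEAN[1]) / STD[1])
        buffer.putFloat((b - MEAN[2]) / STD[2])
    }
    buffer.rewind()
    return buffer
}
```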
For the color conversion step, RenderScript offers an efficient implementation (note that RenderScript is deprecated as of API 31, but remains usable on current devices):
```kotlin
// Wrap the input bitmap in Allocation objects
val inputAllocation = Allocation.createFromBitmap(rs, inputBitmap)
val outputAllocation = Allocation.createTyped(rs, inputAllocation.type)
// Run the custom color-conversion kernel (compiled from color_convert.rs)
val script = ScriptC_color_convert(rs)
script.set_gIn(inputAllocation)
script.forEach_convert(outputAllocation)
// Copy the converted pixels back into a Bitmap
val outputBitmap = Bitmap.createBitmap(inputBitmap.width, inputBitmap.height, Bitmap.Config.ARGB_8888)
outputAllocation.copyTo(outputBitmap)
```
2. Model Deployment and Optimization Strategies
2.1 Model Selection Matrix
| Model | Accuracy (mAP) | Inference time (ms) | Model size | Typical use case |
|---|---|---|---|---|
| MobileNetV2 SSD | 0.68 | 45 | 12 MB | General-purpose object detection |
| YOLOv5s | 0.72 | 82 | 27 MB | Moderate real-time requirements |
| EfficientDet-D0 | 0.71 | 68 | 18 MB | Balanced accuracy and performance |
| Tiny-YOLOv3 | 0.62 | 28 | 4.8 MB | Extremely low resource budgets |
2.2 TensorFlow Lite Optimization in Practice
- Quantized conversion: use the TFLite Converter for full-integer quantization
```python
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_quant_model = converter.convert()
```
- GPU delegate acceleration:
```kotlin
val options = Interpreter.Options().apply {
    addDelegate(GpuDelegate())   // run supported ops on the GPU
    setNumThreads(4)             // CPU threads for ops the delegate cannot handle
    // setUseNNAPI(true)         // NNAPI is an alternative to the GPU delegate; enable one or the other
}
val interpreter = Interpreter(loadModelFile(context), options)
```
- Dynamic input size handling: resize the input tensor at runtime
```kotlin
// Resize the input tensor to the current frame dimensions, then re-allocate tensors
interpreter.resizeInput(0, intArrayOf(1, targetHeight, targetWidth, 3))
interpreter.allocateTensors()

val inputShape = interpreter.getInputTensor(0).shape()
val inputBuffer = ByteBuffer.allocateDirect(
    inputShape[1] * inputShape[2] * inputShape[3] * 4  // 4 bytes per channel, assuming a float32 RGB input
).order(ByteOrder.nativeOrder())
```
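With the input buffer prepared, the forward pass itself is a single `Interpreter.run`, or `runForMultipleInputsOutputs` for detection models with several output tensors. A minimal sketch, assuming an SSD-style model whose outputs are boxes, classes, scores, and detection count (the output order and shapes vary per model and should be checked against the model's metadata):

```kotlin
// Output arrays shaped for a typical SSD-style TFLite detection model (assumed layout)
val maxDetections = 10
val outputBoxes = Array(1) { Array(maxDetections) { FloatArray(4) } }  // [ymin, xmin, ymax, xmax]
val outputClasses = Array(1) { FloatArray(maxDetections) }
val outputScores = Array(1) { FloatArray(maxDetections) }
val numDetections = FloatArray(1)

val outputs = mapOf(
    0 to outputBoxes,
    1 to outputClasses,
    2 to outputScores,
    3 to numDetections
)
interpreter.runForMultipleInputsOutputs(arrayOf(inputBuffer), outputs)
```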
3. Real-Time Detection System Implementation
3.1 Asynchronous Processing Architecture
Use a HandlerThread to build a producer-consumer pipeline:
```kotlin
class CameraHandlerThread(name: String) : HandlerThread(name) {
    // getLooper() blocks until the Looper is prepared, so this is safe to use after start()
    val handler: Handler by lazy { Handler(looper) }

    fun postCameraRequest(runnable: Runnable) {
        handler.post(runnable)
    }
}

// Initialization in the Activity
val cameraThread = CameraHandlerThread("CameraBackground")
cameraThread.start()
val cameraHandler = cameraThread.handler
```
3.2 Detection Result Post-Processing
Implement non-maximum suppression (NMS) to remove overlapping boxes:
```kotlin
fun applyNMS(boxes: Array<Rect>, scores: FloatArray, threshold: Float): List<Rect> {
    val selectedBoxes = mutableListOf<Rect>()
    // Box indices sorted by descending confidence
    val order = scores.indices.sortedByDescending { scores[it] }
    val keep = BooleanArray(boxes.size) { true }
    for (i in order.indices) {
        if (!keep[i]) continue
        selectedBoxes.add(boxes[order[i]])
        for (j in i + 1 until order.size) {
            if (!keep[j]) continue
            val iou = calculateIoU(boxes[order[i]], boxes[order[j]])
            if (iou > threshold) {
                keep[j] = false
            }
        }
    }
    return selectedBoxes
}
```
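The `calculateIoU` helper referenced above is not shown in the original snippet; a straightforward implementation for axis-aligned `Rect` boxes is:

```kotlin
// Intersection-over-Union of two axis-aligned rectangles
fun calculateIoU(a: Rect, b: Rect): Float {
    val left = maxOf(a.left, b.left)
    val top = maxOf(a.top, b.top)
    val right = minOf(a.right, b.right)
    val bottom = minOf(a.bottom, b.bottom)
    if (right <= left || bottom <= top) return 0f

    val intersection = (right - left).toFloat() * (bottom - top).toFloat()
    val areaA = (a.width() * a.height()).toFloat()
    val areaB = (b.width() * b.height()).toFloat()
    return intersection / (areaA + areaB - intersection)
}
```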
4. Performance Optimization in Practice
4.1 Power Consumption Optimization
- Dynamic resolution adjustment:
```kotlin
val characteristics = cameraManager.getCameraCharacteristics(cameraId)
val streamConfigMap = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
// For power savings, pick the smallest supported size that still covers the detection input resolution
val optimalSize = streamConfigMap?.getOutputSizes(SurfaceTexture::class.java)
    ?.filter { it.width >= 640 && it.height >= 480 }
    ?.minByOrNull { it.width * it.height }
    ?: Size(640, 480)
```
- Frame rate control:
```kotlin
// Prefer the narrowest AE target FPS range containing 30 fps (ideally a fixed [30, 30])
val range = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES)
    ?.filter { it.lower <= 30 && it.upper >= 30 }
    ?.minByOrNull { it.upper - it.lower }
    ?: Range(15, 15)
captureBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, range)
```
4.2 Memory Management Strategies
- Use BitmapFactory.Options to limit memory allocation:
```kotlin
val options = BitmapFactory.Options().apply {
    inPreferredConfig = Bitmap.Config.ARGB_8888
    inSampleSize = 2   // decode at 1/2 of the original dimensions
    inMutable = true
}
val inputBitmap = BitmapFactory.decodeByteArray(data, 0, data.size, options)
```
- Implement a Bitmap reuse pool:
```kotlin
object BitmapPool {
    private val pool = mutableListOf<Bitmap>()
    private const val MAX_POOL_SIZE = 5

    @Synchronized
    fun acquireBitmap(width: Int, height: Int): Bitmap {
        // Reuse a matching bitmap if one is available, removing it from the pool while in use
        val index = pool.indexOfFirst { it.width == width && it.height == height && !it.isRecycled }
        if (index >= 0) return pool.removeAt(index)
        return Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    }

    @Synchronized
    fun releaseBitmap(bitmap: Bitmap) {
        // Clear the pixels and return the bitmap to the pool when there is room
        if (pool.size < MAX_POOL_SIZE && !bitmap.isRecycled) {
            bitmap.eraseColor(Color.TRANSPARENT)
            pool.add(bitmap)
        }
    }
}
```
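Typical usage in the frame-processing path looks like the sketch below; `convertFrameToBitmap`, `detector`, and `onDetectionResults` are hypothetical placeholders for the conversion, inference, and result-handling steps described earlier:

```kotlin
fun processFrame(frameData: ByteArray, width: Int, height: Int) {
    val bitmap = BitmapPool.acquireBitmap(width, height)
    try {
        convertFrameToBitmap(frameData, bitmap)   // hypothetical YUV -> RGB conversion into the reused bitmap
        val results = detector.detect(bitmap)
        onDetectionResults(results)
    } finally {
        BitmapPool.releaseBitmap(bitmap)
    }
}
```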
5. Solutions to Common Problems
5.1 Handling Camera Startup Failures
```kotlin
try {
    cameraManager.openCamera(cameraId, stateCallback, backgroundHandler)
} catch (e: CameraAccessException) {
    when (e.reason) {
        CameraAccessException.CAMERA_DISABLED -> {
            // The camera has been disabled (e.g. by device policy)
            Toast.makeText(context, "Camera is disabled", Toast.LENGTH_SHORT).show()
        }
        CameraAccessException.CAMERA_IN_USE -> {
            // The camera is held by another client; try to reconnect
            reconnectCamera()
        }
        else -> throw e
    }
}
```
5.2 Handling Model Loading Failures
```kotlin
try {
    // Memory-map the model file from assets
    val fileDescriptor = assets.openFd("model.tflite")
    val modelBuffer = FileInputStream(fileDescriptor.fileDescriptor).channel.map(
        FileChannel.MapMode.READ_ONLY,
        fileDescriptor.startOffset,
        fileDescriptor.declaredLength
    )
    interpreter = Interpreter(modelBuffer, options)
} catch (e: IOException) {
    Log.e("ModelLoader", "Model loading failed", e)
    // Degradation path: fall back to the last known-good model
    fallbackToLastWorkingModel()
}
```
6. Advanced Feature Extensions
6.1 Multi-Model Cooperative Inference
Implement a cascade detection architecture:
```kotlin
class CascadeDetector(
    private val fastDetector: ObjectDetector,
    private val accurateDetector: ObjectDetector,
    private val confidenceThreshold: Float = 0.5f
) {
    fun detect(bitmap: Bitmap): List<DetectionResult> {
        // Run the lightweight detector first; only invoke the heavier model when something promising is found
        val fastResults = fastDetector.detect(bitmap)
        return if (fastResults.any { it.confidence > confidenceThreshold }) {
            accurateDetector.detect(bitmap)
        } else {
            fastResults
        }
    }
}
```
6.2 Dynamic Model Update Mechanism
```kotlin
// Check whether a newer model version is available
fun checkForModelUpdates(context: Context): Boolean {
    val currentVersion = getInstalledModelVersion(context)
    val latestVersion = fetchLatestModelVersionFromServer()
    return latestVersion > currentVersion
}

// Download and install the update asynchronously
fun downloadAndInstallModel(context: Context, url: String) {
    Executors.newSingleThreadExecutor().execute {
        val tempFile = File.createTempFile("model_update", ".tflite")
        downloadFile(url, tempFile)
        installModel(context, tempFile)
    }
}
```
The implementation approach presented here has been validated in several commercial projects; developers can tune the parameters for their specific scenarios. It is also recommended to build a solid performance monitoring setup, collecting key metrics such as FPS and inference latency through Firebase Performance Monitoring or custom instrumentation, and to keep optimizing the detection experience based on that data (a minimal custom instrumentation sketch follows).
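A minimal sketch of such custom instrumentation, tracking rolling FPS and inference latency; the class name and window size are placeholders, not part of any monitoring SDK:

```kotlin
// Simple rolling performance tracker for FPS and inference latency (illustrative only)
class PerfMonitor(private val windowSize: Int = 30) {
    private val latenciesMs = ArrayDeque<Long>()
    private val frameIntervalsNs = ArrayDeque<Long>()
    private var lastFrameTimeNs = 0L

    // Call once per camera frame to track frame intervals
    fun onFrame() {
        val now = System.nanoTime()
        if (lastFrameTimeNs != 0L) {
            frameIntervalsNs.addLast(now - lastFrameTimeNs)
            if (frameIntervalsNs.size > windowSize) frameIntervalsNs.removeFirst()
        }
        lastFrameTimeNs = now
    }

    // Wrap the inference call to record its latency
    fun <T> measureInference(block: () -> T): T {
        val start = SystemClock.elapsedRealtime()
        val result = block()
        latenciesMs.addLast(SystemClock.elapsedRealtime() - start)
        if (latenciesMs.size > windowSize) latenciesMs.removeFirst()
        return result
    }

    fun currentFps(): Double =
        if (frameIntervalsNs.isEmpty()) 0.0 else 1e9 / frameIntervalsNs.average()

    fun averageLatencyMs(): Double =
        if (latenciesMs.isEmpty()) 0.0 else latenciesMs.average()
}
```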