
Is it possible to change the resolution of the AR image captured in a CVPixelBuffer?

  • Andy Jazz · 6 years ago

    let pixelBuffer: CVPixelBuffer? = sceneView.session.currentFrame?.capturedImage
    

    ARKit captures pixel buffers in a YCbCr format. To render these images correctly on an iPhone display, you need to access the luma and chroma planes of the pixel buffer and convert the full-range YCbCr values to sRGB using a float4x4 ycbcrToRGBTransform matrix.
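
    For context, the conversion referred to here is what Apple's ARKit Metal rendering template does in its fragment shader. Below is a minimal Swift sketch of that matrix; the exact full-range BT.601 coefficients are taken from that sample and should be treated as an assumption rather than part of this question:

    import simd

    // Full-range BT.601 YCbCr -> sRGB conversion matrix (given as columns),
    // matching the ycbcrToRGBTransform in Apple's ARKit Metal sample.
    let ycbcrToRGBTransform = simd_float4x4(
        simd_float4(+1.0000, +1.0000, +1.0000, +0.0000),
        simd_float4(+0.0000, -0.3441, +1.7720, +0.0000),
        simd_float4(+1.4020, -0.7141, +0.0000, +0.0000),
        simd_float4(-0.7010, +0.5291, -0.8860, +1.0000)
    )
    // rgb = ycbcrToRGBTransform * simd_float4(y, cb, cr, 1.0)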

    But I would like to know: can I change the resolution of the AR image captured in the CVPixelBuffer?

    How can I do that? I need the processing cost to be as low as possible.

    1 Answer  |  updated 5 years ago
  • Sohil R. Memon · 6 years ago

    Yes, you can do that. Here's how!

    import Accelerate
    import CoreVideo

    /**
     Resizes a CVPixelBuffer to a new width and height.
     */
    func resizePixelBuffer(_ pixelBuffer: CVPixelBuffer,
                           width: Int, height: Int) -> CVPixelBuffer? {
        return resizePixelBuffer(pixelBuffer, cropX: 0, cropY: 0,
                                 cropWidth: CVPixelBufferGetWidth(pixelBuffer),
                                 cropHeight: CVPixelBufferGetHeight(pixelBuffer),
                                 scaleWidth: width, scaleHeight: height)
    }

    /**
     Crops and scales a 32-bit BGRA/ARGB CVPixelBuffer using vImage.
     */
    func resizePixelBuffer(_ srcPixelBuffer: CVPixelBuffer,
                           cropX: Int,
                           cropY: Int,
                           cropWidth: Int,
                           cropHeight: Int,
                           scaleWidth: Int,
                           scaleHeight: Int) -> CVPixelBuffer? {

        CVPixelBufferLockBaseAddress(srcPixelBuffer, .readOnly)
        // Balance the lock on every exit path, including the early returns below.
        defer { CVPixelBufferUnlockBaseAddress(srcPixelBuffer, .readOnly) }

        guard let srcData = CVPixelBufferGetBaseAddress(srcPixelBuffer) else {
            print("Error: could not get pixel buffer base address")
            return nil
        }

        // Point the source vImage buffer at the crop origin (4 bytes per pixel).
        let srcBytesPerRow = CVPixelBufferGetBytesPerRow(srcPixelBuffer)
        let offset = cropY*srcBytesPerRow + cropX*4
        var srcBuffer = vImage_Buffer(data: srcData.advanced(by: offset),
                                      height: vImagePixelCount(cropHeight),
                                      width: vImagePixelCount(cropWidth),
                                      rowBytes: srcBytesPerRow)

        // Allocate destination storage; ownership passes to the new pixel buffer.
        let destBytesPerRow = scaleWidth*4
        guard let destData = malloc(scaleHeight*destBytesPerRow) else {
            print("Error: out of memory")
            return nil
        }
        var destBuffer = vImage_Buffer(data: destData,
                                       height: vImagePixelCount(scaleHeight),
                                       width: vImagePixelCount(scaleWidth),
                                       rowBytes: destBytesPerRow)

        let error = vImageScale_ARGB8888(&srcBuffer, &destBuffer, nil, vImage_Flags(0))
        if error != kvImageNoError {
            print("Error:", error)
            free(destData)
            return nil
        }

        // Free the malloc'ed bytes when the destination pixel buffer is released.
        let releaseCallback: CVPixelBufferReleaseBytesCallback = { _, ptr in
            if let ptr = ptr {
                free(UnsafeMutableRawPointer(mutating: ptr))
            }
        }

        let pixelFormat = CVPixelBufferGetPixelFormatType(srcPixelBuffer)
        var dstPixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreateWithBytes(nil, scaleWidth, scaleHeight,
                                                  pixelFormat, destData,
                                                  destBytesPerRow, releaseCallback,
                                                  nil, nil, &dstPixelBuffer)
        if status != kCVReturnSuccess {
            print("Error: could not create new pixel buffer")
            free(destData)
            return nil
        }
        return dstPixelBuffer
    }
    

    Usage:

    if let pixelBuffer = sceneView.session.currentFrame?.capturedImage,
       let resizedBuffer = resizePixelBuffer(pixelBuffer, width: 320, height: 480) {
        // Core ML processing
    }
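
    Note that ARKit's capturedImage is a biplanar YCbCr (420f) buffer, while the vImage routine above assumes a 32-bit ARGB/BGRA layout, so for a camera frame you will typically want to convert the format while downscaling. Below is a minimal Core Image sketch of that idea; the helper name downscaledBGRABuffer and the choice of kCVPixelFormatType_32BGRA output are my assumptions, not part of the original answer:

    import CoreImage
    import CoreVideo

    // Hypothetical helper: scales any CVPixelBuffer that Core Image understands
    // (including ARKit's YCbCr camera frames) and renders it into a BGRA buffer.
    func downscaledBGRABuffer(from source: CVPixelBuffer,
                              width: Int, height: Int,
                              context: CIContext = CIContext()) -> CVPixelBuffer? {
        let scaleX = CGFloat(width)  / CGFloat(CVPixelBufferGetWidth(source))
        let scaleY = CGFloat(height) / CGFloat(CVPixelBufferGetHeight(source))
        let scaled = CIImage(cvPixelBuffer: source)
            .transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

        var output: CVPixelBuffer?
        guard CVPixelBufferCreate(nil, width, height,
                                  kCVPixelFormatType_32BGRA,
                                  nil, &output) == kCVReturnSuccess,
              let buffer = output else { return nil }

        // Core Image does the YCbCr -> RGB conversion and the resize in one render.
        context.render(scaled, to: buffer)
        return buffer
    }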
    

    Reference: https://github.com/hollance/CoreMLHelpers/tree/master/CoreMLHelpers