Saturday, March 31, 2012

Using the camera API and getting raw image frames on the N9 (MeeGo)

For the past few weeks I have been working on a pet project. I wanted to add a feature to it that required using the camera and accessing its individual raw frames.

Accessing the camera on the Harmattan platform is supported through the QCamera API. The API is quite easy to use, and I looked at the camera example application that works on the N900.

After going through the example application, I decided to use the same approach in my application.

Creating a camera and capturing an image or a video using QCamera is quite simple and straightforward, and I did not face any problem with it. But remember that you need to request access from the Aegis framework to use the camera; you can use the following request for this purpose, and a minimal capture sketch comes right after it.
    
    <aegis>
        <request>
            <credential name="GRP::video" />
            <credential name="GRP::pulse-access" />
            <for path="absolute path to application" />
        </request>
    </aegis>
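
The simple capture path itself needs very little code. Just as an illustration (this is my own sketch, not code from the example application), a minimal still-image capture using QCamera and QCameraImageCapture looks roughly like this:

#include <QCamera>
#include <QCameraImageCapture>

//minimal still-image capture sketch using the QtMultimediaKit API
void captureStillImage(QObject *parent)
{
    QCamera *camera = new QCamera(parent);
    QCameraImageCapture *imageCapture = new QCameraImageCapture(camera);

    camera->setCaptureMode(QCamera::CaptureStillImage);
    camera->start();

    //in a real application you would wait until the camera reaches
    //QCamera::ActiveState before calling capture()
    imageCapture->capture();
}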


But when I tried to access raw image frames from the camera, it did not prove so easy. The N900 camera example does not work as-is on the N9 and needs some changes.

I will try to list those changes, and the reasons behind them, here.

To access individual frames from the camera I decided to use the MyVideoSurface class; you can find the original source here.

But when I ran the program on the device, I noticed many camera-related errors in the console and never got a valid camera image.

The first error goes like this:
CameraBin error: "Could not negotiate format" 
The reason for this is that QCamera on the N9 delivers frames in UYVY format, so we need to add support for this format (QVideoFrame::Format_UYVY) in the MyVideoSurface class. But if you only advertise this support and do not implement code that handles the format, you will face the following error:
Failed to start video surface / CameraBin error: "Internal data flow error."
The reason is that you cannot use the QVideoFrame offered in the present() call to create a QImage directly. You need to convert the frame from UYVY to RGB and then use that RGB data to create the QImage.

The QGraphicsVideoItem class on the N900 has a fast NEON-based implementation of this conversion. You can find its implementation here.
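
If you do not want to copy the NEON routine, a plain C++ fallback that converts one UYVY line to RGB565 works as well, just much slower. The following is only a sketch I wrote for illustration; the function name simply mirrors the NEON one:

#include <QtGlobal>

static inline uchar clampByte(int value)
{
    return value < 0 ? 0 : (value > 255 ? 255 : uchar(value));
}

//convert one line of UYVY422 (U0 Y0 V0 Y1 for every two pixels) to RGB565,
//dst must hold 'width' 16-bit pixels and width must be even
static void uyvy422_to_rgb16_line_plain(uchar *dst, const uchar *src, int width)
{
    quint16 *out = reinterpret_cast<quint16 *>(dst);
    for (int x = 0; x < width; x += 2) {
        const int u  = src[0] - 128;
        const int y0 = src[1] - 16;
        const int v  = src[2] - 128;
        const int y1 = src[3] - 16;
        src += 4;

        //integer approximation of the ITU-R BT.601 conversion
        const int rd =  409 * v + 128;
        const int gd = -100 * u - 208 * v + 128;
        const int bd =  516 * u + 128;

        for (int i = 0; i < 2; ++i) {
            const int c = 298 * (i == 0 ? y0 : y1);
            const uchar r = clampByte((c + rd) >> 8);
            const uchar g = clampByte((c + gd) >> 8);
            const uchar b = clampByte((c + bd) >> 8);
            *out++ = quint16(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }
    }
}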

Now you should have a valid QImage that you can use for further processing. Here is my code for the MyVideoSurface class after making the above changes. This class was tested on an N950 device.
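
I am only listing the .cpp file here; the matching header is short. The declaration below is a sketch reconstructed from the implementation (the member names are simply the ones used in the code):

#ifndef MYVIDEOSURFACE_H
#define MYVIDEOSURFACE_H

#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>
#include <QVideoFrame>
#include <QPixmap>

class MyVideoSurface : public QAbstractVideoSurface
{
    Q_OBJECT
public:
    explicit MyVideoSurface(QObject *parent = 0);

    bool start(const QVideoSurfaceFormat &format);
    bool present(const QVideoFrame &frame);
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const;

signals:
    //emitted for every frame that was converted successfully
    void frameUpdated(const QPixmap &pixmap);

private:
    QVideoSurfaceFormat mVideoFormat;
    QVideoFrame mFrame;
    QPixmap mLastFrame;
};

#endif // MYVIDEOSURFACE_H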

MyVideoSurface::MyVideoSurface( QObject* parent)
    : QAbstractVideoSurface(parent)
{
}

bool MyVideoSurface::start(const QVideoSurfaceFormat &format)
{
    mVideoFormat = format;
    //start only if the format is UYVY; I do not handle other formats for now
    if( format.pixelFormat() == QVideoFrame::Format_UYVY ){
        QAbstractVideoSurface::start(format);
        return true;
    } else {
        return false;
    }
}

bool MyVideoSurface::present(const QVideoFrame &frame)
{
    mFrame = frame;

    if (surfaceFormat().pixelFormat() != mFrame.pixelFormat() ||
            surfaceFormat().frameSize() != mFrame.size()) {
        qDebug() << "stop()";
        stop();
        return false;
    } else {
        //this is necessary to get valid data from frame
        mFrame.map(QAbstractVideoBuffer::ReadOnly);

#ifdef  __ARM_NEON__

        QImage lastImage( mFrame.size(), QImage::Format_RGB16);
        const uchar *src = mFrame.bits();
        uchar *dst = lastImage.bits();
        const int srcLineStep = mFrame.bytesPerLine();
        const int dstLineStep = lastImage.bytesPerLine();
        const int h = mFrame.height();
        const int w = mFrame.width();

        for (int y=0; y < h; y++) {
            //this function you can find in qgraphicsvideoitem_maemo5.cpp,
            //link is mentioned above
            uyvy422_to_rgb16_line_neon(dst, src, w);
            src += srcLineStep;
            dst += dstLineStep;
        }

        mLastFrame = QPixmap::fromImage(lastImage);
        //emit signal, other can handle it and do necessary processing
        emit frameUpdated(mLastFrame);

#endif
        mFrame.unmap();

        return true;
    }
}

QList<QVideoFrame::PixelFormat> MyVideoSurface::supportedPixelFormats(
            QAbstractVideoBuffer::HandleType handleType) const
{
    if (handleType == QAbstractVideoBuffer::NoHandle) {
        //add support for UYVY format
        return QList<QVideoFrame::PixelFormat>() <<  QVideoFrame::Format_UYVY;
    } else {
        return QList<QVideoFrame::PixelFormat>();
    }
}

And the following is the MyCamera class, which uses the MyVideoSurface class above to display the individual frames. I derived this class from QDeclarativeItem, so I can use it in QML as well.
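
As before, only the .cpp side is listed; a matching declaration, again reconstructed from the implementation rather than taken from the original header, could look roughly like this:

#ifndef MYCAMERA_H
#define MYCAMERA_H

#include <QDeclarativeItem>
#include <QGraphicsPixmapItem>
#include <QCamera>
#include "myvideosurface.h"

class MyCamera : public QDeclarativeItem
{
    Q_OBJECT
public:
    explicit MyCamera(QDeclarativeItem *parent = 0);
    ~MyCamera();

    void startCapture();
    void stopCapture();

private slots:
    //receives the converted frames from MyVideoSurface
    void frameUpdated(const QPixmap &pixmap);

private:
    QCamera *mCamera;
    MyVideoSurface *mSurface;
    QGraphicsPixmapItem *mPixmap;
};

#endif // MYCAMERA_H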

MyCamera::MyCamera( QDeclarativeItem * parent ) :
    QDeclarativeItem(parent),mCamera(0)
{
    //create the pixmap item before starting the camera,
    //so it already exists when the first frame arrives
    mPixmap = new QGraphicsPixmapItem(this);
    startCapture();
}

MyCamera::~MyCamera(){
    stopCapture();
}

void MyCamera::stopCapture(){
    if( mCamera )
        mCamera->stop();
}

void MyCamera::startCapture()
{
    mCamera = new QCamera(this);
    //set Still image mode for image capture or Video for capturing video
    //mCamera->setCaptureMode(QCamera::CaptureStillImage);
    mCamera->setCaptureMode(QCamera::CaptureVideo);

    //set my surface, to get individual frame from camera
    mSurface = new MyVideoSurface();
    mCamera->setViewfinder(mSurface );

    connect(mSurface,SIGNAL(frameUpdated(QPixmap)),this,SLOT(frameUpdated(QPixmap)));

    //set up video capture setting
    QVideoEncoderSettings videoSetting;
    //videoSetting.setQuality((QtMultimediaKit::EncodingQuality)0); //low
    videoSetting.setResolution(QSize(848, 480));

    // Media recorder to capture video, use record() to capture video
    QMediaRecorder* videoRecorder = new QMediaRecorder(mCamera);
    videoRecorder->setEncodingSettings(videoRecorder->audioSettings(),videoSetting);

    //  set up image capture setting
    //QImageEncoderSettings imageSetting;
    //imageSetting.setQuality((QtMultimediaKit::EncodingQuality)0); //low
    //imageSetting.setResolution(QSize(320, 240));

    // Image capture to capture Image,use capture() to capture image
    //m_stillImageCapture = new QCameraImageCapture(mCamera,this);
    //m_stillImageCapture->setEncodingSettings(imageSetting);

    // Start camera
    if (mCamera->state() == QCamera::ActiveState) {
        mCamera->stop();
    }
    mCamera->start();
}

void MyCamera::frameUpdated(const QPixmap& pixmap) {
    mPixmap->setPixmap(pixmap);
}
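
To use MyCamera from QML, the type still has to be registered with the declarative engine. Here is a rough sketch of how that can be done from main(); the module URI and the QML file name are just my own choices:

#include <QApplication>
#include <QtDeclarative>
#include <QUrl>
#include "mycamera.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    //make MyCamera usable as a QML element
    qmlRegisterType<MyCamera>("MyCameraApp", 1, 0, "MyCamera");

    QDeclarativeView view;
    view.setSource(QUrl("qrc:/main.qml"));
    view.show();

    return app.exec();
}

After this, a MyCamera element can be declared in main.qml after an "import MyCameraApp 1.0" statement.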
