Augmented Reality using OpenCV-Python


This project implements marker-based augmented reality with OpenCV: it detects a target image in a webcam feed and overlays a video onto it.


Requirements

  • opencv-python
  • numpy

Install them with:

pip install -r requirements.txt

License 📜

This project is available under the MIT license. See the file for more info.

ORB in OpenCV

ORB (Oriented FAST and Rotated BRIEF) keypoint detector and descriptor extractor.

The algorithm uses FAST in pyramids to detect stable keypoints, selects the strongest features using FAST or Harris response, finds their orientation using first-order moments and computes the descriptors using BRIEF (where the coordinates of random point pairs (or k-tuples) are rotated according to the measured orientation).

Source: ORB openCV

Brute-Force Feature Matching

The Brute-Force matcher is simple: it takes the descriptor of one feature in the first set and matches it against every feature in the second set using some distance calculation, and the closest one is returned.

For the BF matcher, we first have to create the BFMatcher object using cv2.BFMatcher(). It takes two optional params. The first one is normType, which specifies the distance measurement to be used. By default it is cv2.NORM_L2, which is good for SIFT, SURF etc. (cv2.NORM_L1 is also available). For binary-string-based descriptors like ORB, BRIEF, BRISK etc., cv2.NORM_HAMMING should be used, which uses Hamming distance as the measurement. If ORB is using WTA_K == 3 or 4, cv2.NORM_HAMMING2 should be used.


Source: Brute-Force Feature Matching

Key Points Matching


PolyLines Drawing


Mask Creation on Detected Homography



Importing OpenCV and NumPy

import numpy as np
import cv2

Setting a Standard size for Input Images

stdShape = (275,440)

Opening the Webcam, Target Image and Display Video

webCam = cv2.VideoCapture(0)
imgTarget = cv2.imread("assets/imgTarget.jpg")
imgTarget = cv2.resize(imgTarget,stdShape)
displayVid = cv2.VideoCapture("assets/displayVid.mp4")

Creating an ORB detector for Feature Detection

ORB = cv2.ORB_create(nfeatures=1000)

Identifying features in the target image using the ORB.detectAndCompute method.

keyPoint1, descriptor1 = ORB.detectAndCompute(imgTarget,None)

Drawing Features on the Target Image

imgTarget = cv2.drawKeypoints(imgTarget, keyPoint1, None)

Starting the main loop: while the webcam is open, read a frame and use the ORB.detectAndCompute method to spot features in it, then read and resize the next frame of the display video.

while webCam.isOpened():
    _ , imgWebcam = webCam.read()
    keyPoint2, descriptor2 = ORB.detectAndCompute(imgWebcam,None)
    imgAR  = imgWebcam.copy()
    _ , imgVideo = displayVid.read()
    imgVideo = cv2.resize(imgVideo, stdShape)

Using Brute-Force feature matching to compare the target image's features with the webcam frame's features.

    bruteForce = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = bruteForce.knnMatch(descriptor1,descriptor2,k=2)

Filtering out only the good matches with Lowe's ratio test

    goodMatches = []
    for m,n in matches:
        if m.distance < 0.75 * n.distance:
            goodMatches.append(m)

Converting goodMatches into coordinate arrays that OpenCV accepts

    if len(goodMatches) > 15:
        srcPts = np.float32([keyPoint1[m.queryIdx].pt for m in goodMatches]).reshape(-1,1,2)
        dstPts = np.float32([keyPoint2[m.trainIdx].pt for m in goodMatches]).reshape(-1,1,2)

Using the cv2.findHomography method with the cv2.RANSAC algorithm to estimate the homography matrix from the matched points

        matrix, mask = cv2.findHomography(srcPts,dstPts,cv2.RANSAC,5)

Using the cv2.perspectiveTransform method to project the target image's corners (at the standard 275×440 size) into the webcam frame

        pts = np.float32([[0,0],[0,440],[275,440],[275,0]]).reshape(-1,1,2)
        dst = cv2.perspectiveTransform(pts,matrix)

Using cv2.polylines to draw the border of the detected homography on the webcam frame

        imgWebcam = cv2.polylines(imgWebcam,[np.int32(dst)],True,(255,0,255),3)

Warping Display Video into Detected Homography

        imgWarp = cv2.warpPerspective(imgVideo,matrix,(imgWebcam.shape[1],imgWebcam.shape[0]))

Creating a black mask the size of the webcam frame and filling the detected homography region with white

        newmask = np.zeros((imgWebcam.shape[0],imgWebcam.shape[1]),np.uint8)
        cv2.fillPoly(newmask,[np.int32(dst)],255)

Blacking out the homography region in the webcam frame with the inverted mask, then overlaying the warped video

        invMask = cv2.bitwise_not(newmask)
        imgAR = cv2.bitwise_and(imgAR,imgAR,mask=invMask)
        imgAR = cv2.bitwise_or(imgAR,imgWarp)

Finally, displaying the output and exiting the loop when 'q' is pressed

    cv2.imshow("imgAR",imgAR)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

webCam.release()
displayVid.release()
cv2.destroyAllWindows()


📬 Contact

If you want to contact me, you can reach me through the handles below.

@prrthamm   Pratham Bhatnagar

