Air Mouse: Doing Mouse Operations Using Finger Gestures

Hey surfer, in this blog I am going to write about how we can do basic mouse operations like moving the pointer, clicking, double clicking and right clicking using only finger gestures.

Controlling Mouse with Gestures

This blog is part of the series #7DaysOfComputerVisionProjects. Links to the blogs and videos of each project are:

Introduction

As the name Air Mouse suggests, it is a computer mouse that works through finger gestures. We will be using two Python libraries, mouse and Mediapipe. Mouse is a library for doing mouse operations like click, drag, release and so on. We will be using the Hand module of Mediapipe, an open-source tool, to extract the landmarks of the hand and fingers. Mediapipe also has multiple other modules like selfie segmentation, pose estimation, face detection, etc.

Installation

It is a good idea to install these tools in a virtual environment.

  • `pip install mediapipe` to install Mediapipe.
  • `pip install mouse` to install the mouse package.

Preliminary Tasks

Import Dependencies

Get Screen Size

We use tkinter only to find the screen size.
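A minimal sketch of the lookup; tkinter is imported inside the function since it is needed only for this one call (the function name is ours):

```python
def get_screen_size():
    """Return (width, height) of the primary screen using tkinter."""
    import tkinter as tk
    root = tk.Tk()
    width, height = root.winfo_screenwidth(), root.winfo_screenheight()
    root.destroy()  # we only needed the root window to query the screen
    return width, height
```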

Write Basic Functions

  • We need to convert landmark positions from frame coordinates to screen coordinates, so the method `frame_pos2screen_pos` is written.
  • We will be working with the Euclidean distance between landmarks to make sense of gestures.
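The two helpers can be sketched as below; `frame_pos2screen_pos` is the simple unitary method (screen point = frame point × screen size / frame size), and the argument order is an assumption:

```python
import numpy as np

def euclidean(pt1, pt2):
    """Euclidean distance between two (x, y) points."""
    return np.linalg.norm(np.array(pt1) - np.array(pt2))

def frame_pos2screen_pos(frame_size, screen_size, frame_pos):
    """Scale a point from frame coordinates to screen coordinates
    using the unitary method: screen = frame * screen_size / frame_size."""
    factor = np.divide(screen_size, frame_size)
    return tuple(np.multiply(frame_pos, factor))
```

For example, with a 520×720 frame and a screen twice that size, the frame point (100, 200) maps to the screen point (200, 400).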

Writing the Code

Step By Step

It is necessary to view the landmark positions before making any gesture assumptions. Please refer to the image below.

https://google.github.io/mediapipe/solutions/hands.html

  • Start by opening the camera.

  • Define a frame size, in our case 520 rows and 720 columns.

  • For the ROI, i.e. the Region of Interest, define a rectangle that resides inside the frame, making sure it leaves enough space on each side.

  • Take the modules drawing_utils and hands from Mediapipe's solutions. As the names suggest, drawing_utils will draw the landmarks and hands lets us work with the detection model.

  • Define a variable to count frames and a constant that decides on which frame counts the events are checked.

  • Prepare a variable to hold the last event name.

  • Prepare a variable for the events: single click, double click, right click and drag.

  • Now prepare a Mediapipe Hands object by giving arguments like max_num_hands, min_detection_confidence and so on. As the names suggest, max_num_hands sets how many hands to search for, and min_detection_confidence is the minimum detection confidence threshold below which detected hands are discarded.
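Putting the setup steps together, it may look like the following sketch; the ROI margins, the frame-count constant and the event names are assumed values, and the Mediapipe detector is built in a helper so the rest of the snippet stands on its own:

```python
frame_height, frame_width = 520, 720   # frame size used in this blog

# ROI: a rectangle inside the frame, leaving space on each side (margins assumed)
roi = {"x1": 100, "y1": 50, "x2": frame_width - 100, "y2": frame_height - 100}

frame_count = 0     # counts frames since the last event check
CHECK_EVERY = 5     # check events only on every 5th frame (assumed constant)
last_event = None   # holds the last event name
events = ["single_click", "double_click", "right_click", "drag"]

def make_hands(max_num_hands=1, min_detection_confidence=0.6):
    """Create the Mediapipe Hands detector (requires mediapipe installed).
    mp.solutions.drawing_utils is the companion module that draws landmarks."""
    import mediapipe as mp
    return mp.solutions.hands.Hands(
        max_num_hands=max_num_hands,
        min_detection_confidence=min_detection_confidence,
    )
```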

  • Read a camera frame.

  • Flip the frame horizontally so it looks like a selfie camera.

  • Resize the frame to our desired size.

  • Draw a rectangle to show the ROI area on the frame.

  • Extract the width and height of the frame.

  • Convert the frame from BGR to RGB because the Hands object expects images in RGB format.

  • Pass the RGB image to the `process` method of the `Hands` object to get the result.

  • Now, for each hand, we extract the finger landmarks, like the index finger's tip, dip, pip and so on. There are 21 landmarks in total for each hand. After extracting them, we need to convert them back to pixel coordinates.
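Mediapipe returns each landmark in normalized [0, 1] coordinates, so converting back to pixels is a single scale per axis; a minimal helper (the name is ours):

```python
def landmark_to_pixel(norm_x, norm_y, frame_width, frame_height):
    """Convert a normalized Mediapipe landmark to (x, y) pixel coordinates."""
    return int(norm_x * frame_width), int(norm_y * frame_height)
```

For a 720×520 frame, the normalized point (0.5, 0.5) becomes the pixel (360, 260).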

  • If the distance between the index fingertip and the middle fingertip is less than 60, consider the event a double click, i.e. touch the index and middle fingers together to double click. The value 60 is relative to the frame size. Otherwise, if the last event was also a double click, set the last event to none.

  • If the distance between the index finger's pip and the thumb tip is less than 60, consider the event a single click, i.e. move the thumb near the bottom of the index finger for a single left click. Otherwise, if the last event was also a single click, set the last event to none.

  • If the distance between the thumb tip and the index fingertip is below 60, consider the event a left press, i.e. when the thumb tip and the index tip come near, press the left button. This will help us make selections. Otherwise, if the last event was also a left press, set the last event to release.

  • If the distance between the thumb tip and the middle fingertip is below 60, consider the event a right click, i.e. when the thumb tip and the middle fingertip come near, do a right click. Otherwise, if the last event was also a right click, set the last event to none.

  • After checking all events, reset the frame count to 0.
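The four distance checks can be gathered into one pure function; the 60-pixel threshold and the event behavior follow the text above, while the function name and dictionary keys are ours:

```python
def classify_event(dists, last_event, threshold=60):
    """Map fingertip distances (in pixels) to a mouse event name.

    dists keys: 'index_tip__middle_tip', 'thumb_tip__index_pip',
    'thumb_tip__index_tip', 'thumb_tip__middle_tip'.
    Returns the event to fire, or None when no gesture is active.
    """
    if dists["index_tip__middle_tip"] < threshold:
        return "double_click"
    if dists["thumb_tip__index_pip"] < threshold:
        return "single_click"
    if dists["thumb_tip__index_tip"] < threshold:
        return "left_press"
    if dists["thumb_tip__middle_tip"] < threshold:
        return "right_click"
    # no gesture active: an ongoing left press becomes a release, others reset
    return "release" if last_event == "left_press" else None
```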

Convert our useful landmarks from frame coordinates to screen coordinates:

  • First, clip the values to the ROI region.

  • Convert the clipped values to frame coordinates, i.e. treat the top left of the ROI as the top left of the frame and shift the coordinates accordingly.

  • Convert the frame point of the index pip to a screen point using the simple unitary method. The method `frame_pos2screen_pos` does this.

  • Move the cursor to the converted position, i.e. the index pip.

  • Finally, if the current frame count has been reset, apply the event, then increase the frame count.

  • Draw each landmark.

  • Show the frame.
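The clipping and conversion steps can be sketched as below; the ROI rectangle and screen size are example values, the function names are ours, and the final `mouse.move` call is commented out since it needs the mouse package:

```python
import numpy as np

def clip_to_roi(point, roi):
    """Clamp a frame point into the ROI rectangle."""
    return (np.clip(point[0], roi["x1"], roi["x2"]),
            np.clip(point[1], roi["y1"], roi["y2"]))

def roi_pos2screen_pos(point, roi, screen_size):
    """Treat the ROI's top-left corner as the origin, then scale the
    point to the screen with the unitary method."""
    roi_w, roi_h = roi["x2"] - roi["x1"], roi["y2"] - roi["y1"]
    screen_w, screen_h = screen_size
    return ((point[0] - roi["x1"]) * screen_w / roi_w,
            (point[1] - roi["y1"]) * screen_h / roi_h)

# With the mouse package installed, the cursor follows the index pip:
# mouse.move(*roi_pos2screen_pos(clip_to_roi(index_pip, roi), roi, (1920, 1080)))
```

This way the small ROI rectangle spans the whole screen, so the hand never needs to reach the frame borders.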

Complete Code

Better Version of the Code

Finally

The above code works, but it is hard to get to a given point and perform the desired operation in time, so it is still a rough system. I will keep working on the system above to try to make it more efficient. If you found this blog helpful, please leave a comment on our YouTube video and don't forget to subscribe. The code is available on GitHub.
* Code Link
* YouTube Video Link


Ramkrishna Acharya(Viper)

Associate Data Scientist, eXtenso Data
