LED - 4. Let's make a face detecting LED display - Part 1 (Design, Prepare Parts)

I explained how to assemble and use an RGB LED matrix in a previous post. In this post, we will build on the LED display created earlier. In general, an LED signboard only transmits information one way; there is no interaction with the viewer. My other blog covers how to implement edge AI using NVidia's Jetson series, and it includes a post on recognizing human faces. By combining that facial recognition work with the LED display from the previous post, we will make an LED display that determines whether a person is looking at it and then tracks that person's face.


System configuration

  • Connect a USB webcam to the Jetson Nano. The Jetson Nano detects whether the subject is a person, then locates the person's face.
  • It transmits the position of the subject's face on the screen in real time.
  • The Raspberry Pi adjusts the position of the eyeball drawn on the LED matrix so that it faces the received face position.
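The face position has to travel from the Jetson Nano to the Raspberry Pi over the local network. A minimal sketch of one way to do this is shown below, using a UDP datagram carrying a small JSON message; the port number and message format here are my assumptions for illustration, not something fixed by this project:

```python
import json
import socket

FACE_PORT = 5005  # arbitrary port chosen for this sketch


def encode_face(x, y):
    """Pack the face center (pixel coordinates) into a JSON datagram."""
    return json.dumps({'x': x, 'y': y}).encode('utf-8')


def decode_face(data):
    """Unpack a datagram back into (x, y) pixel coordinates."""
    msg = json.loads(data.decode('utf-8'))
    return msg['x'], msg['y']


def send_face(sock, addr, x, y):
    """Jetson Nano side: transmit the latest face position."""
    sock.sendto(encode_face(x, y), addr)

# The Raspberry Pi side would bind a UDP socket to ('', FACE_PORT)
# and call recvfrom() in a loop, decoding each datagram as it arrives.
```

UDP suits this use case because only the most recent face position matters; a lost datagram is simply superseded by the next one.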




Parts required

  • Jetson Nano : 1 EA
  • USB Webcam (Logitech C270) : 1 EA
  • Network switch : 1 EA
  • Raspberry Pi 3B or 3B+ : 1 EA
  • RGB LED Matrix HAT (Electrodragon) : 1 EA
  • RGB LED Matrix (32X64) : 4 EA

 <NVidia Jetson Nano   Logitech C270   Raspberry Pi 3B   Electrodragon HAT   RGB LED Matrix>

In addition to the above parts, the following accessories are required.
  • LAN cable : 2 EA
  • HUB75 interface cable : 2 EA
  • Barrel jack 5V power supply for Jetson Nano : 1 EA
  • Micro USB 5V power supply for Raspberry Pi : 1 EA
  • 5V 4A power supply for the RGB LED matrices : 1 EA

Check your webcam view angle

To determine the angle of the person's face from the camera center, you need to know the angle of view of the camera you are using.

Connect your webcam to your computer, then run the Python program below. I'm going to use 640X480 resolution.

Tips : Higher resolutions reduce the AI processing speed of the Jetson Nano, so low resolutions such as 320X240 and 640X480 are adequate.

import cv2
import sys

print('Webcam Test')
# Open the first video device using the V4L2 backend
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
if not cap.isOpened():
    print('Unable to read camera feed')
    sys.exit(1)

cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
ret, img = cap.read()
if not ret:
    print('WebCAM Read Error')
    sys.exit(1)
h, w, c = img.shape
print('Video Frame shape H:%d, W:%d, Channel:%d' % (h, w, c))

count = 1
while cap.isOpened():
    try:
        ret, img = cap.read()
        if not ret:
            break
        count += 1
        cv2.imshow('webcam', img)
        # Press ESC in the preview window to quit
        if cv2.waitKey(1) & 0xFF == 27:
            break
    except KeyboardInterrupt:
        print('Ctrl + C')
        break

print('Webcam Frame read End. Total Frames are : %d' % (count))
cap.release()
cv2.destroyAllWindows()


This is my Python output screen.




In my case, the distance between the webcam and the ruler is 200mm, and the width of the screen is 160mm.


Using the trigonometric formula, we get the angle shown in the following figure: half the screen width (80mm) at a distance of 200mm gives a half angle of atan(80/200) ≈ 21.8 degrees, so the horizontal view angle is about 43.6 degrees.


So the angle of 1 pixel is 0.068125 degrees. We'll use this value later.
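The calculation above can be reproduced with a few lines of Python; the numbers are the measurements from my setup:

```python
import math

distance_mm = 200.0  # distance from the webcam to the ruler
width_mm = 160.0     # visible width of the scene at that distance
h_pixels = 640       # horizontal resolution in use

# Half the screen width subtends half the view angle
half_angle = math.degrees(math.atan((width_mm / 2) / distance_mm))
view_angle = 2 * half_angle            # about 43.6 degrees
deg_per_pixel = view_angle / h_pixels  # about 0.068 degrees per pixel

print('View angle : %.1f deg' % view_angle)
print('Angle per pixel : %.6f deg' % deg_per_pixel)
```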

Be careful : If you change the resolution, the view angle might change, so repeat the above steps to obtain a new per-pixel angle.



Wrapping up

In the next post, we will continue to implement facial recognition in the Jetson Nano.
You can download the source codes here (https://github.com/raspberry-pi-maker/IoT).






