LED - 7. Distributed display system for very large RGB LED matrices

So far, most of my posts have used a Raspberry Pi with an RGB Matrix HAT, because that combination gives complete freedom over the display content. In particular, the OpenCV and PIL modules in Python make it possible to produce all kinds of content. The RGB LED matrix is so popular and widely used that there are many commercial products for it as well. Users unfamiliar with programming can create their own content using dedicated controllers for the RGB LED matrix, such as the HD-D15; another post on this blog covers that product. But I still love the Raspberry Pi + RGB Matrix HAT combination. In previous posts, we have seen how to use up to 3 LED chains on one HAT. Normally up to 12 LED matrices can be connected in one daisy chain, but the practical limit depends on the size of the matrices, the performance of the Raspberry Pi, and the CPU resources required for real-time content creation. Assuming up to 12 32x64 matrices per chain, it is theoretically possible to connect 36 RGB LED matrices to one RGB Matrix HAT. The overall size depends on the layout: arranged as a 6x6 grid, that is 384x192 pixels. The dedicated LED products mentioned above go a step further and use separate receiving cards.

<LED Display area division>

The method described in the figure above is mostly used in large commercial signboards. The green lines in the figure are Ethernet cables. Receiving cards can be connected through a gigabit switch, but most sending and receiving cards have two Ethernet ports and pass the data on to the next receiving card in a daisy chain.

<Receiving card that has 2 LAN ports and 12 HUB75 ports>

Distributed display system using Raspberry Pi and RGB Matrix HAT

To make my own big signboard, I configure Raspberry Pis and RGB Matrix HATs using the same concept.

<Configuration of the distributed system>

The laptop can be any computer with Python and OpenCV installed; you can even use another Raspberry Pi. However, it must be powerful enough to process images or videos at the display resolution. Images processed on the laptop are sent to the Raspberry Pis over the network. Sending 30 FPS video can require quite a bit of network bandwidth, and the higher the video resolution, the more bandwidth is needed. And, as will be explained later, the frames are transmitted in an uncompressed format, which requires more bandwidth than compressed images. Therefore, a gigabit switch and cabling are recommended for stable packet delivery. For still images, however, a 100 Mbps switch will suffice.
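To get a feel for the numbers, here is a rough estimate for the 256x64 display used later in this post (the 12-byte chunk headers add a small amount of overhead on top of this):

# Rough bandwidth estimate for uncompressed BGR frames, ignoring UDP/IP overhead
W, H, C = 256, 64, 3                        # display size used in this post
FPS = 30
bytes_per_frame = W * H * C                 # 49,152 bytes per frame
mbits = bytes_per_frame * FPS * 8 / 1e6     # total, split across both Raspberry Pis
print('%.1f Mbit/s' % mbits)                # ~11.8 Mbit/s

Even this small display needs about 12 Mbit/s, and the requirement grows linearly with the pixel count, which is why a gigabit network is recommended for larger builds.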

Image over Ethernet

I will be using OpenCV to process images and videos, so an image or a video frame is available as an uncompressed NumPy array. Therefore, you need to know how to transmit and receive a NumPy array over the network. I explained this in a previous post about OpenCV, and I highly recommend reading it before continuing this article. There, I explained how to serialize a NumPy array to bytes and how to split it into chunks when transferring large amounts of data.
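For readers who skip that post, here is a minimal sketch of the core idea: the array is serialized with tobytes() and rebuilt on the other side with frombuffer() plus reshape(), so sender and receiver must agree on the shape and dtype in advance.

import numpy as np

img = np.zeros((64, 128, 3), dtype=np.uint8)    # H x W x C image
payload = img.tobytes()                         # raw bytes, ready to be chunked and sent

# On the receiving side (shape and dtype must be known in advance)
restored = np.frombuffer(payload, dtype=np.uint8).reshape(64, 128, 3)
assert (restored == img).all()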


Sending still images from a laptop

On the laptop, an image or a video frame is read and divided into two parts, which are then sent to two different Raspberry Pis.




'''
This code may run on any PC or Raspberry
'''

import sys
import numpy as np
import cv2
import socket, struct, time

CHUNK_SIZE = 8192 * 6

'''
Modify this information for your environment
'''
RPI = [ ("192.168.11.84", 4321), ("192.168.11.91", 4321)]

Image_W = 64 * 4    # total display width: 4 panels of 64 pixels
Image_H = 32 * 2    # total display height: 2 panels of 32 pixels

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bufsize = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF) 

print('Snd buf size:%d'%(bufsize))

img = cv2.imread('wide_image.jpg')
if img is None:
    print('wide_image.jpg not found')
    sys.exit()
img = cv2.resize(img, (Image_W, Image_H))

splt_img = np.hsplit(img, 2)
data = []
for j in splt_img:
    data.append(j.tobytes())

chunks = []
snd_chunks = []
chunk_len = []
s = time.time()

'''
The chunk and struct packing work is done in advance to finish the transfer quickly
'''
for x in range(0,2):
    print(x)
    chunks.append([data[x][i:i+CHUNK_SIZE] for i in range(0, len(data[x]), CHUNK_SIZE)])
    chunk_len.append(len(chunks[x]))

    for i, chunk in enumerate(chunks[x]):
        if(i == chunk_len[x] - 1): #last
            chunk = struct.pack("<I", 1) + struct.pack("<I", i) + struct.pack("<I", chunk_len[x]) + chunk    # len(data) + 12 bytes , "<I" : < means little-endian, I means 4 bytes integer
        else:    
            chunk = struct.pack("<I", 0) + struct.pack("<I", i) + struct.pack("<I", chunk_len[x]) + chunk    # len(data) + 12 bytes , "<I" : < means little-endian, I means 4 bytes integer
        snd_chunks.append(chunk)

total = 0
'''
To finish both transfers at the same time, send chunks to the two Raspberry Pis alternately.
'''
for x in range(chunk_len[0]):
    sock.sendto(snd_chunks[x], RPI[0])
    total += len(snd_chunks[x])
    sock.sendto(snd_chunks[x + chunk_len[0]], RPI[1])
    total += len(snd_chunks[x + chunk_len[0]])
    print('Total sent:%d'%(total))

e = time.time()
print('time:%f'% (e - s))
<net_send_image.py>

The parts of the above code that deserve careful attention are as follows.

  • Split the image into two using the numpy hsplit function.
  • Split each half into CHUNK_SIZE units. You may customize the CHUNK_SIZE value.
  • Prepend a 12-byte header to every image packet. This header makes it possible to detect packet loss, reordering, and duplicates, and to recognize the end of the transmission (see the sketch after this list).
  • The header fields are packed in little-endian byte order for network transmission.
  • To finish both transfers at the same time, packets are sent alternately to the two Raspberry Pis.
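As a minimal sketch, the 12-byte header can be packed and parsed like this (the function names are mine; the layout is the one used in the code above):

import struct

def pack_chunk(is_last, seq, total, payload):
    # '<I' = little-endian 4-byte unsigned int; three of them form the 12-byte header
    return struct.pack('<III', 1 if is_last else 0, seq, total) + payload

def unpack_chunk(packet):
    is_last, seq, total = struct.unpack('<III', packet[:12])
    return is_last, seq, total, packet[12:]

pkt = pack_chunk(True, 3, 4, b'last chunk')
print(unpack_chunk(pkt))    # (1, 3, 4, b'last chunk')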

Still image display on 2 Raspberry Pis

I'm going to use eight 64x32 LED panels. Four panels form one display, and each display is connected to its own Raspberry Pi, for a total of two. I intentionally configured the LED matrix chaining differently on the two Pis: the first drives one chain of four panels (chain_length=4, parallel=1), while the second drives two parallel chains of two panels each (chain_length=2, parallel=2). It would be fine to configure them the same way, but the different configurations show the independence of each module.




On the Raspberry Pi, the data received over the network is restored to a NumPy array, and the image is then sent to the RGB LED matrices.


import argparse, sys
import numpy as np
import cv2
from PIL import Image
from rgbmatrix import RGBMatrix, RGBMatrixOptions
import socket, struct, time
 
parser = argparse.ArgumentParser(description="RGB LED matrix Example")
parser.add_argument("--chain", type=int, default = 1, help="chain count") 
args = parser.parse_args() 
 
 
CHUNK_SIZE = 52000
SERVER = ("0.0.0.0",4321)
'''
Display size information
'''
H = 64
W = 128
C = 3

'''
I will flip half of the image. Because I'm using one HUB75 chain.
See https://iot-for-maker.blogspot.com/2020/01/led-5-lets-make-large-led-display-part_21.html
'''
if args.chain == 1:
    y_end = int(H / 2)
elif args.chain == 2:
    y_end = int(H)
else:
    print('Invalid chain number:%d, must be 1 or 2'%(args.chain))
    sys.exit()
x_end = W

# Configuration for the matrix
options = RGBMatrixOptions()
options.cols = 64
options.rows = 32
options.chain_length = int(4 / args.chain)
options.parallel = args.chain
options.gpio_slowdown = 3   # gpio_slowdown expects an integer
options.show_refresh_rate = 1
options.hardware_mapping = 'regular'  # I'm using Electrodragon HAT
matrix = RGBMatrix(options = options)


sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(SERVER)
bufsize = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF) 
print('Rcv buf size:%d'%(bufsize))

if CHUNK_SIZE < bufsize:
    CHUNK_SIZE = bufsize

sock.settimeout(20)
total = 0
buf = []
packet_cnt = 0

def reset():
    global buf, total, packet_cnt
    total = 0
    buf = []
    packet_cnt = 0

while(True):
    try:
        data, addr = sock.recvfrom(CHUNK_SIZE)
        total += len(data)
        key = int.from_bytes(data[:4],byteorder="little")
        seq = int.from_bytes(data[4:8],byteorder="little")
        cnt = int.from_bytes(data[8:12],byteorder="little")
        buf += data[12:]
        packet_cnt += 1
        #print('Total rcv:%d Key:%d, seq:%d total chunk:%d'%(total, key, seq, cnt))
        if key == 1:    #last
            if(packet_cnt != cnt):
                print('Total rcv cnt:%d total chunk:%d'%(packet_cnt, cnt))
                reset()
                continue
            img = np.asarray(buf, dtype=np.uint8)
            img = img.reshape(H,W,C)
            
            if args.chain == 1:
                '''
                split the image top, bottom
                '''
                top_img = img[0: y_end, 0:x_end]
                bottom_img = img[y_end: y_end * 2, 0:x_end]

                '''
                flip the top image
                '''
                top_img = cv2.flip(top_img, 0) #vertical
                top_img = cv2.flip(top_img, 1) #horizontal    
                img = np.concatenate((top_img, bottom_img), axis=1)   #stack horizontally 
                
            final = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # PIL uses RGB format
            #print('final image size:H%d W:%d'% (final.shape[0], final.shape[1]))

            im_pil = Image.fromarray(final)
            matrix.Clear()
            matrix.SetImage(im_pil, 0)
            reset()

    except KeyboardInterrupt:
        break
    except socket.timeout:
        reset()
        continue    
<net_recv_image.py>

Run the above code on two Raspberry Pis.


# Run this command on the Raspberry Pi with 1 chain
python3 net_recv_image.py

# Run this command on the Raspberry Pi with 2 chains
python3 net_recv_image.py --chain=2

Be careful: the two Raspberry Pis connect their chains differently. The first uses one chain, so the top half of the image must be flipped both vertically and horizontally. The second uses two parallel chains, so the image can be displayed without any flipping. See these articles for a detailed explanation:
https://iot-for-maker.blogspot.com/2020/01/led-5-lets-make-large-led-display-part_21.html
https://iot-for-maker.blogspot.com/2020/01/led-5-lets-make-large-led-display-part_84.html

Then run the "net_send_image.py" code on the other PC.


F:\src\IoT\led7>python net_send_image.py
Snd buf size:65536
split image size:H64 W:128
0
1
Total sent:49176
time:0.001973


If successful, you should see the following screen.


You can see that the image sent from the PC was successfully processed by the Raspberry Pis. If you don't see the screen above, check the Raspberry Pis' IP addresses again and make sure they are correctly set in the net_send_image.py file.

The image transmission and display above did not use a double buffer. For still images this is no problem, but when playing video, flickering may occur if a double buffer is not used. Therefore, the video receiver must be modified to use a double buffer, as sketched below.
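The change itself is small. Here is a minimal sketch of the double-buffering idiom from the rpi-rgb-led-matrix Python binding, assuming matrix has been created as in the receiver above and next_frame() stands in for however you obtain the next PIL image (the full receiver below does exactly this):

# Draw each frame on an off-screen canvas, then swap it in on the vertical sync
double_buffer = matrix.CreateFrameCanvas()
while True:
    im_pil = next_frame()               # hypothetical frame source
    double_buffer.SetImage(im_pil)
    # SwapOnVSync displays the canvas and returns the previously visible one for reuse
    double_buffer = matrix.SwapOnVSync(double_buffer)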


Sending video from a laptop

This is not much different from still image transmission. Frames are read one by one with OpenCV's VideoCapture and transmitted at the video frame rate.


'''
This code may run on any PC or Raspberry
'''
import argparse, sys
import numpy as np
import cv2
import socket, struct, time


parser = argparse.ArgumentParser(description="Network Numpy Example")
parser.add_argument("--video", type=str, required = True, help="video file name")
args = parser.parse_args()

FPS = 30.0
SLEEP = 1.0 / FPS

CHUNK_SIZE = 8192 * 6

'''
Modify this information for your environment
'''
RPI = [ ("192.168.11.84", 4321), ("192.168.11.91", 4321)]

Image_W = 64 * 4    # total display width: 4 panels of 64 pixels
Image_H = 32 * 2    # total display height: 2 panels of 32 pixels

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bufsize = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF) 

print('Snd buf size:%d'%(bufsize))



cap = cv2.VideoCapture(args.video)

while cap.isOpened():
    start = time.time()    
    ret, img = cap.read()
    if(ret == False):
        break
    img = cv2.resize(img, (Image_W, Image_H))

    splt_img = np.hsplit(img, 2)
    data = []
    for j in splt_img:
        data.append(j.tobytes())

    chunks = []
    snd_chunks = []
    chunk_len = []
    s = time.time()

    '''
    The chunk and struct packing work is done in advance to finish the transfer quickly
    '''
    for x in range(0,2):
        chunks.append([data[x][i:i+CHUNK_SIZE] for i in range(0, len(data[x]), CHUNK_SIZE)])
        chunk_len.append(len(chunks[x]))

        for i, chunk in enumerate(chunks[x]):
            if(i == chunk_len[x] - 1): #last
                chunk = struct.pack("<I", 1) + struct.pack("<I", i) + struct.pack("<I", chunk_len[x]) + chunk    # len(data) + 12 bytes , "<I" : < means little-endian, I means 4 bytes integer
            else:    
                chunk = struct.pack("<I", 0) + struct.pack("<I", i) + struct.pack("<I", chunk_len[x]) + chunk    # len(data) + 12 bytes , "<I" : < means little-endian, I means 4 bytes integer
            snd_chunks.append(chunk)

    total = 0
    '''
    To finish both transfers at the same time, send chunks to the two Raspberry Pis alternately.
    '''
    for x in range(chunk_len[0]):
        sock.sendto(snd_chunks[x], RPI[0])
        total += len(snd_chunks[x])
        sock.sendto(snd_chunks[x + chunk_len[0]], RPI[1])
        total += len(snd_chunks[x + chunk_len[0]])
        # print('Total sent:%d'%(total))

    elapsed = time.time() - start
    #print('elapsed:%f'%(elapsed))
    time.sleep(max(0, SLEEP - elapsed))

cap.release()
<net_send_video.py>

Video display on 2 Raspberry Pis

This is a little different from displaying still images. A double buffer is used: each received frame is drawn on an off-screen canvas, which is then swapped onto the display on the vertical sync.


'''
net_recv_image.py doesn't use a double buffer, so some flickering might occur.
To remove this problem, I'm going to use a double buffer here.
'''
import argparse, sys
import numpy as np
import cv2
from PIL import Image
from rgbmatrix import RGBMatrix, RGBMatrixOptions
import socket, struct, time
 
parser = argparse.ArgumentParser(description="RGB LED matrix Example")
parser.add_argument("--chain", type=int, default = 1, help="chain count") 
args = parser.parse_args() 
 
 
CHUNK_SIZE = 52000
SERVER = ("0.0.0.0",4321)
'''
Display size information
'''
H = 64
W = 128
C = 3

'''
I will flip half of the image. Because I'm using one HUB75 chain.
See https://iot-for-maker.blogspot.com/2020/01/led-5-lets-make-large-led-display-part_21.html
'''
if args.chain == 1:
    y_end = int(H / 2)
elif args.chain == 2:
    y_end = int(H)
else:
    print('Invalid chain number:%d, must be 1 or 2'%(args.chain))
    sys.exit()
x_end = W

# Configuration for the matrix
options = RGBMatrixOptions()
options.cols = 64
options.rows = 32
options.chain_length = int(4 / args.chain)
options.parallel = args.chain
options.gpio_slowdown = 3   # gpio_slowdown expects an integer
options.show_refresh_rate = 1
options.hardware_mapping = 'regular'  # I'm using Electrodragon HAT
matrix = RGBMatrix(options = options)
double_buffer = matrix.CreateFrameCanvas()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(SERVER)
bufsize = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF) 
print('Rcv buf size:%d'%(bufsize))

if CHUNK_SIZE < bufsize:
    CHUNK_SIZE = bufsize

sock.settimeout(20)
total = 0
buf = []
packet_cnt = 0

def reset():
    global buf, total, packet_cnt
    total = 0
    buf = []
    packet_cnt = 0

index = 0

while(True):
    try:
        data, addr = sock.recvfrom(CHUNK_SIZE)
        total += len(data)
        key = int.from_bytes(data[:4],byteorder="little")
        seq = int.from_bytes(data[4:8],byteorder="little")
        cnt = int.from_bytes(data[8:12],byteorder="little")
        buf += data[12:]
        packet_cnt += 1
        #print('Total rcv:%d Key:%d, seq:%d total chunk:%d'%(total, key, seq, cnt))
        if key == 1:    #last
            if(packet_cnt != cnt):
                print('Total rcv cnt:%d total chunk:%d'%(packet_cnt, cnt))
                reset()
                continue
            index += 1    
            img = np.asarray(buf, dtype=np.uint8)
            img = img.reshape(H,W,C)
            if args.chain == 1:
                '''
                split the image top, bottom
                '''
                top_img = img[0: y_end, 0:x_end]
                bottom_img = img[y_end: y_end * 2, 0:x_end]

                '''
                flip the top image
                '''
                top_img = cv2.flip(top_img, 0) #vertical
                top_img = cv2.flip(top_img, 1) #horizontal    
                img = np.concatenate((top_img, bottom_img), axis=1)   #stack horizontally 
                
            final = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # PIL uses RGB format
            h, w, c = final.shape
            #print('final image size:H%d W:%d'% (final.shape[0], final.shape[1]))

            im_pil = Image.fromarray(final)

            double_buffer.SetImage(im_pil)
            double_buffer = matrix.SwapOnVSync(double_buffer)                
            reset()

    except KeyboardInterrupt:
        break
    except socket.timeout:
        reset()
        continue    
<net_recv_video.py>


Run the above code on two Raspberry Pis.


# Run this command on the Raspberry Pi with 1 chain
python3 net_recv_video.py

# Run this command on the Raspberry Pi with 2 chains
python3 net_recv_video.py --chain=2

Then send the video from the PC.


F:\src\IoT\led7>python net_send_video.py --video=Frozen_s.mp4
Snd buf size:65536

If successful, you should see the following screen.

There is some noise in the photo, but it is not noticeable when viewed with the naked eye.
Each Raspberry Pi received and displayed half of every video frame. Even though I didn't set up a special sync mechanism, the two halves never felt like they were playing independently. It was as natural as a single display.

Wrapping up

In this post I used two Raspberry Pis, each displaying half of the image, but with a little extension this approach can handle much larger images. If the PC processing the video is powerful enough and the sending software divides the screen into an NxN grid (a sketch follows below), it could even take on FHD, QHD, or 4K resolution. Of course, the hardware cost and assembly effort will be a bigger obstacle than the software.
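As a sketch of that idea, NumPy's vsplit and hsplit can cut a frame into an NxN grid, and each tile then goes to its own receiver. The grid size and the row-by-row tile order here are just illustrative assumptions:

import numpy as np

def split_grid(img, n):
    '''Split an H x W x C frame into n*n tiles, row by row.'''
    tiles = []
    for band in np.vsplit(img, n):        # n horizontal bands
        tiles.extend(np.hsplit(band, n))  # each band into n tiles
    return tiles

frame = np.zeros((128, 256, 3), dtype=np.uint8)   # e.g. a 2x2 grid of 64x128 displays
tiles = split_grid(frame, 2)
print(len(tiles), tiles[0].shape)                 # 4 tiles of shape (64, 128, 3)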


You can download the source code here: https://github.com/raspberry-pi-maker/IoT


