How to use the video datastreaming I get from nginx server?


I have three nodes in my network: dataServer --- node1 --- node2. My video file "friends.mp4" is stored on dataServer. I run both dataServer and node2 as nginx-rtmp servers. I use ffmpeg on node1 to pull the stream from dataServer and push the converted stream to the application "live" on node2. Here is my nginx.conf for node2.

worker_processes  1;
events {
    worker_connections  1024;
}

rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        application play {
            play /usr/local/nginx/html/play;
        }

        application hls {
            live on;
            hls on;
            hls_path /usr/local/nginx/html/hls;
            hls_fragment 1s;
            hls_playlist_length 4s;
        }

        application live {
            live on;
            allow play all;
        }
    }
}
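
The relay step on node1 is roughly of the following form (the source application name on dataServer and the stream key here are placeholders, not the exact values from my setup):

ffmpeg -i rtmp://dataServer:1935/play/friends -c copy -f flv rtmp://node2:1935/live/friends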

I want to run this Python code to recognize the faces in friends.mp4:

import cv2

vid_capture=cv2.VideoCapture("rtmp://127.0.0.1:1935/live")
face_detect = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')
if (vid_capture.isOpened() == False):
    print("Error opening the video file")
else:
    fps = vid_capture.get(5)
    print("Frames per second : ", fps,'FPS')
    frame_count = vid_capture.get(7)
    print('Frame count : ', frame_count)

while(vid_capture.isOpened()):
    ret, frame = vid_capture.read()
    if ret == True:
        gray = cv2.cvtColor(frame, code=cv2.COLOR_BGR2GRAY)
        face_zone = face_detect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        for x, y, w, h in face_zone:
            cv2.rectangle(frame, pt1 = (x, y), pt2 = (x + w, y + h), color = [0,0,255], thickness=2)
            cv2.circle(frame, center = (x + w//2, y + h//2), radius = w//2, color = [0,255,0], thickness = 2)
        cv2.imshow('Frame', frame)
        key = cv2.waitKey(50)
        if key == ord('q'):
            break
    else:
        break
vid_capture.release()
cv2.destroyAllWindows()

But I can't, because cv2.VideoCapture cannot get the stream from "rtmp://127.0.0.1:1935/live". Maybe that is because this path is not a file. How can I get the video stream received by the nginx server and feed it to my OpenCV model? Is there a way to access the data stream received by the nginx server and turn it into a Python object that OpenCV can use?

CodePudding user response:

Try changing the file into a live stream, then use cv2 to process the stream:

DataServer --> Node1 (FFmpeg: MP4 to RTMP) --> Node2 (media server)
Node2 --> Node1 (cv2 processes the RTMP stream)

On Node1, you could run a command like:

ffmpeg -re -i friends.mp4 -c copy -f flv rtmp://node2/live/livestream
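
Since friends.mp4 is a finite file, the push stops when the file ends. If you want the stream to keep running while you test, a looping variant (assuming your ffmpeg build supports the -stream_loop input option) would be:

ffmpeg -re -stream_loop -1 -i friends.mp4 -c copy -f flv rtmp://node2/live/livestream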

Then you have an RTMP stream, which you can process on Node1 again:

cv2.VideoCapture("rtmp://node2:1935/live/livestream")

Please note that the RTMP server is not on Node1, so you should never use localhost or 127.0.0.1 when cv2 consumes the stream; use Node2's hostname or IP, and include the stream key (/live/livestream) in the URL.
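
A minimal sketch of the consumer side on Node1, assuming Node2 is reachable by the hostname node2 and the stream key is livestream (adjust the URL to your setup; OpenCV must be built with FFmpeg support to open RTMP URLs):

import cv2

# Full RTMP URL: application "live" plus the stream key pushed by ffmpeg
vid_capture = cv2.VideoCapture("rtmp://node2:1935/live/livestream")
if not vid_capture.isOpened():
    print("Error opening the RTMP stream")
else:
    face_detect = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')
    while True:
        ret, frame = vid_capture.read()
        if not ret:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Draw a box around each detected face
        for x, y, w, h in face_detect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.imshow('Frame', frame)
        if cv2.waitKey(1) == ord('q'):
            break
    vid_capture.release()
    cv2.destroyAllWindows()

Also note that a live stream has no fixed length, so properties like the frame count (CAP_PROP_FRAME_COUNT) will not be meaningful here.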
