Low Latency Streaming (LLS) operates through a four-stage pipeline that keeps both ingest and playback latency low across the streaming workflow. Each stage is optimised for performance, scalability, and reliability.
Edge Ingest: The streamer pushes RTMP streams to the nearest Conversant ingest edge node, reducing upstream latency.
Backbone Transmission: Media streams are transmitted over a high-speed, stable, and low-latency backbone from the ingest edge to the core media processing layer. The transmission path is optimised for reliability and minimal jitter across regions.
Media Processing: The media processing layer is powered by high-performance hardware, including NVIDIA GPUs, and enhanced by optimised software algorithms. This combination enables stable, real-time processing of audio and video streams, significantly reducing latency in media processing.
Edge Delivery: Processed streams are distributed through Conversant CDN delivery edges, enabling accelerated, low-latency playback (over multiple protocols, including HTTP-FLV, Low-Latency HLS, HLS, and MPEG-DASH) for end-users.
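The ingest step above accepts a push from any RTMP-capable encoder. As a minimal sketch, the snippet below assembles an ffmpeg push command for the ingest edge; the domain, application name, and stream name are placeholders, not actual Conversant endpoints:

```python
def build_push_command(input_file, stream_name,
                       ingest="rtmp://ingest.example.com/myapp"):
    """Assemble an ffmpeg argument list that pushes a local file over RTMP."""
    push_url = f"{ingest}/{stream_name}"
    return [
        "ffmpeg",
        "-re",             # read input at its native frame rate (live pacing)
        "-i", input_file,  # source to stream
        "-c", "copy",      # no re-encode: keeps encoder-side latency and CPU low
        "-f", "flv",       # RTMP carries media in an FLV container
        push_url,
    ]

cmd = build_push_command("demo.mp4", "mystream")
print(cmd[-1])  # → rtmp://ingest.example.com/myapp/mystream
```

Copying the codec (`-c copy`) avoids a re-encode on the publishing side; transcoding, when needed, happens later in the media processing layer.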
The push URL follows the format `rtmp://ingest.example.com/myapp/{STREAM_NAME}`. Replace `{STREAM_NAME}` with a custom value (e.g. `mystream`), resulting in: `rtmp://ingest.example.com/myapp/mystream`. Encoders that ask for a server URL and a stream key separately expect the push URL split at the last `/`: the server URL is everything before it (e.g. `rtmp://ingest.example.com/myapp`) and the stream key is the part after it (`mystream`).