The VideoVerse

TVV EP 06 - Behnam Kakavand - Low latency and ultra-low latency streaming

October 28, 2022 Visionular Season 1 Episode 6

What variables do you need to be aware of when deciding the level of latency you need for your stream? Low and ultra-low latency is a hot topic in live sports, poker tournaments, and online betting. Behnam Kakavand, video R&D lead engineer at Evolution Gaming, joins us to discuss some things you might want to consider as you make these decisions.

Watch the full video version.

Learn more about Visionular and get more information on AV1.



[Narrator] Welcome to The VideoVerse

Behnam: Hi, I'm Behnam Kakavand and I work for Evolution.

Mark: Behnam, it's really great to have you here on The VideoVerse podcast. Today we're gonna talk about: is your stream low latency or ultra-low latency? Real-time, interactive video communication is absolutely coming. And it's really exciting; there are a lot of great applications for entertainment, for play, for work, for education. So why don't we start with you telling us, as you see it, what is the difference between low latency and ultra-low latency? Are they one and the same?

Behnam: Well, actually, first we have to discuss what latency is. When we are talking about latency, what do we mean? In the video streaming industry, it means: something happens in front of the camera, or whatever generates the video, and how long does it take for the user to see that action? That's what we call latency. As for low latency and ultra-low latency, yes, of course, it's a buzzword these days, and everybody has low latency solutions. Not a bad thing, actually; it's a very good thing.

But the difference is very, very dependent on who you are talking with. The conventional wisdom is that anything lower than five seconds of latency, meaning that whatever happens in front of the camera, the user sees it in less than five seconds, usually between two or three seconds and five seconds, is what we call low latency. That's glass-to-glass latency. And if it is lower than one or two seconds, again, depending on who you're asking, it is called ultra-low latency. For us at Evolution, for example, below two seconds is what we consider ultra-low latency. And it is an important topic for sure.
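The tiers Behnam describes can be sketched as a tiny classifier. The thresholds below are the rough, who-you-ask-dependent numbers from this conversation, not a formal standard:

```python
def classify_latency(glass_to_glass_s: float) -> str:
    """Rough latency tiers using the (debatable) thresholds discussed here:
    below ~2 s is ultra-low (Evolution's working definition), 2-5 s is low,
    and anything above that is standard latency."""
    if glass_to_glass_s < 2.0:
        return "ultra-low latency"
    if glass_to_glass_s <= 5.0:
        return "low latency"
    return "standard latency"
```

So a 500 ms WebRTC stream and a 1.5 s stream both land in the ultra-low tier, while well-tuned HLS at 4 s is merely low latency.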

Mark: Yeah. There are some technology differences, so maybe we should touch on that also to set the stage here. What are the technologies for low latency versus an ultra-low-latency type of solution?
Can you stream with, for example, does HLS work at one second, two seconds?

Behnam: Absolutely not, actually. Regular HLS, no, no luck with that. With regular HLS, if you tune everything down and basically optimize everything from the cameras all the way to your player and distribution and everything in between, you will get something around 10 to 15 seconds, maybe nine at most, like six or seven seconds, but you have to sacrifice so many things. There are alternative solutions to get below five seconds, namely Apple's Low-Latency HLS, and there are other not-so-standard, non-Apple versions of HLS.

They're calling them community LHLS and others. And there's LL-DASH, low-latency DASH, as well. All of these are HTTP-based solutions. I highly recommend watching the excellent talk by Will Law at Demuxed, where he talks about all of the different solutions that facilitate low-latency streaming. If you want to go any lower than that, there's absolutely no luck with the HTTP-based solutions. And even on the lower end of low-latency streaming, the HTTP-based solutions are going to create some issues.
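A back-of-envelope model shows why regular HLS lands where it does: the player typically buffers a few segments behind the live edge, on top of producing the current segment and moving it through the CDN. The numbers below are illustrative assumptions, not measurements:

```python
def hls_latency_estimate(segment_s: float, buffered_segments: int = 3,
                         encode_s: float = 1.0, delivery_s: float = 1.0) -> float:
    """Rough glass-to-glass HLS latency: finish the in-flight segment,
    plus the segments the player keeps buffered, plus encode and CDN time."""
    return encode_s + segment_s + buffered_segments * segment_s + delivery_s
```

With everything tuned down to 2-second segments this gives about 10 seconds; with default 6-second segments, about 26. That matches the point above: regular HLS reaches the 10-to-15-second range only after aggressive tuning.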

Mainly, for example, measuring the bandwidth. With very small segment sizes, it is quite troublesome to measure the bandwidth, which you are going to need to provide adaptive bitrate and adaptive playback. It becomes a hard problem. But if you want anything lower than that, currently the best solution is WebRTC.
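One common mitigation for those noisy per-chunk measurements (an illustrative sketch, not necessarily what any particular player does) is to smooth the samples, for example with an exponentially weighted moving average:

```python
def ewma_throughput(samples_bps, alpha=0.3):
    """Smooth noisy per-chunk throughput samples. With very small segments,
    connection ramp-up dominates each download, so raw samples jump around;
    an EWMA damps those jumps before the ABR logic picks a rendition."""
    estimate = None
    for sample in samples_bps:
        estimate = sample if estimate is None else alpha * sample + (1 - alpha) * estimate
    return estimate
```

The trade-off is responsiveness: the heavier the smoothing, the slower the player reacts to a genuine drop in bandwidth.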

It is the thing that we are using for interactive video, for web conferencing and things like that, for cloud gaming. Basically, WebRTC can provide latencies below 500 milliseconds, half a second. And it is the standard solution right now for ultra-low latency. But there's a big gap between those two or three seconds and this half second, and no standard solution basically covers that range.

An example is our solution. We don't need 500 milliseconds. Of course, if we can get it, we'll be happy. But above three seconds is too much for us. This gap is empty, and nobody fills it for us, so we had to do our own thing. We can talk about that further.

Mark: Yeah, can you explain what the use cases are for the various services that you operate?

Behnam: Sure. Well, first, let's talk about the use cases of low latency. Why do we need low latency? It may sound obvious to the video people, but let's discuss it. The thing is that whenever there's time sensitivity involved in the content, you have to deliver it as fast as possible, and latency becomes a major factor, the major thing that you have to fight with. To give you an example: if you are watching a live sports game on your mobile phone over a streaming service, and your neighbor is watching it on their TV, broadcast TV.

And you know that for broadcast TV, it has been said that anything between five, six, seven seconds is a standard latency. If you suddenly hear them through the walls that your favorite team has scored a goal, it ruins your experience completely. The excitement of the live game is gone.

Mark: That's right, it's gone.

Behnam: Exactly. So you have to be at least on par with the broadcast. In the case of live sports, five seconds, four seconds is good. But if we are talking about interactive experiences, anything like calls or games or gambling or auctions, these are also time sensitive and require an interaction between the two sides of the stream: someone who's in front of the camera and the user who is watching. And in these cases, the amount of latency becomes even more, so to say, critical.

If two people are talking to each other, like us right now, and you have to wait even four or five seconds for me to reply, that just ruins the experience again. So we go as low as we can with the latency.

In our case, at Evolution, our business is basically providing live casino. And in live casino, there is a big element of interaction between the game presenters and the game itself, what happens in the studio, at the table, and the players, the users playing that game. So if there is a huge gap between what is asked of the users by the game presenter and the action that is required of the player, then it doesn't feel like a real game, something that people actually experience in reality. And our goal is to recreate that realistic experience. So latency becomes a major thing that we have to overcome.

Mark: Interesting. There's what I like to call the low latency triangle trade-off, which is true regardless of a latency discussion or not. In video encoding, as we all know, you can have quality, you can have reliability, you can have scale. And of course we can parse that with other factors, but at least for this discussion, I think those are most relevant. Certainly in your business, reliability is critical, and scale: you need to be able to support a lot of potentially simultaneous viewers, or there are different applications there. Talk to us about how you think about those trade-offs. And if you can, maybe share where you have chosen to make some trade-offs, and why, and where you haven't, when we think about quality, reliability of service, and the ability to scale.

Behnam: In the case of reliability and scale, we had to build everything ourselves to be able to serve our videos, our games, in that very narrow gap between one second and two or three seconds. And then we get to the third one, quality. This is the part where we are trying our best to get as much as we can, but it's the hardest part, to be honest with you, because of the time. I believe most of our listeners are very familiar with this topic, but if you want to get good quality, there is the same triangle there: the resources, the time, the bitrate, and the quality.

And again, over there, because of the nature of our content, we have to sacrifice the time. We have to compress our videos as fast as we can, so we have to sacrifice the quality. Because if we were to sacrifice the bitrate part of it, since our videos are our sole source of income, if they are too huge, I mean volume-wise, if the bitrates are too high, we cannot deliver to as many people. And that would cut into our profits.

Mark: Are you using software or hardware encoders, or a mix, and describe as much as you're able to, what does your system look like? How is it built?

Behnam: Sure. Well, it's a very interesting solution under the hood, and we've spent a lot of time, a huge amount of manpower, to build it. On the encoder side of things, right now we are just using software-based encoders, and it's libx264, basically, the thing that everybody is using. The reason for that is that it provides us, and everyone else, with so many options and so much freedom that you can fine-tune your encoding process to get two out of those three aspects of bitrate, quality, and resources.

Mark: And for you, H.264 also covers literally every device that you would want someone to be able to receive your content on.

Behnam: That is correct, that is correct. And yes, we are tuning it, highly tuning it. The term content-aware encoding is very, very familiar to everybody, and we have to do sort of the same thing, but for live video streaming, for our games. So each and every one of our encoders is tuned to the specific game that it is working to compress.

Mark: So explain that. Now we're starting to talk about encoding, but hey, we're both encoding geeks here.

Behnam: Well, the thing is that it's where everything starts.

Mark: How is it? I guess what I'm curious about is how are the games different? Just in my mind, I'm thinking, well, why would one game be different from another? You have a, what do you call them again? It's not a dealer. Or do you actually call,

Behnam: No, it's game presenters.

Mark: Game presenter, yeah. But it's like a dealer, like in a casino. Okay, so you have a game presenter, you have a table of some sort, you have a backdrop, like, how is it different?

Behnam: That's the thing. For the traditional casino games, that is true. Almost all of them look the same: very static, not much is changing.

Mark: Because it's a studio, right? You just have a lot of sets and cameras and lights and everything.

Behnam: More than a thousand of them, actually.

Mark: Amazing, amazing.

Behnam: And the thing is that, yes, for the traditional games, that is absolutely correct. But the other thing that we are doing is trying to create entertainment. It's not just the casino thing. We have newer generation games, VR games, mixed reality, and virtual and real studios, things like that. Those are very, very complex games, mixing the traditional casino games with modern technology. They're very, very entertaining as well, but highly challenging when it comes to the encoding, the distribution, the playback, because of the variety of scenes and complexities in a single game, and the amount of time that we don't have to encode them and push them to our CDN and network and eventually to our players' devices.

So that makes it very, very complicated and requires a lot of time and expertise to tune them so you can find a good balance, where you don't spend too much time, so you are not increasing the latency, while we are trying to keep the bitrate graphs as smooth as possible. Because jumps in the bitrate will, at the end of the day, create a bad experience for the users in terms of quality and the possibility of playback stalls. So yeah, that is a big challenge for us on that side, I think.

Mark: So I took you on a little bit of a detour where you were explaining the content-aware, content-adaptive approach. What else can you tell us about how you're approaching that? You're using x264; I assume CRF is a part of the picture. Is it or is it not part of your optimization scheme? Tell us.

Behnam: CRF is and isn't part of the scheme. In some cases we are using CRF, but in some other cases we cannot use CRF, because of the exact thing that CRF is very, very smart at: it creates an output that has exactly enough bits dedicated to it. Not more than it needs, not less than it needs. The exact amount it needs.

But again, in the case of some of our games, which have a very, very static scene that suddenly changes to a highly complex one, with lots of motion and small details, that would create a huge jump and spike in the bitrate. That is not good. This is, again, the part where we have to sacrifice the quality. Of course, we know that capped CRF, so to say, creates the best quality, but sometimes you may be better off just setting the bitrates.
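The "capped CRF" idea is commonly expressed in x264/ffmpeg as a CRF target plus VBV constraints, so quality stays constant until the cap kicks in on a sudden complex scene. This is a generic sketch of those flags, not Evolution's actual configuration:

```python
def capped_crf_args(crf: int, maxrate_kbps: int, bufsize_kbps: int) -> list:
    """Build ffmpeg arguments for capped CRF with libx264: CRF picks the
    quality, while VBV (maxrate/bufsize) clamps the bitrate spikes that a
    static-scene-turns-complex moment would otherwise produce."""
    return [
        "-c:v", "libx264",
        "-crf", str(crf),
        "-maxrate", f"{maxrate_kbps}k",
        "-bufsize", f"{bufsize_kbps}k",
    ]
```

A small bufsize keeps the cap tight, which helps latency and smooth bitrate graphs, at the cost of quality during those spikes.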

And one aspect of encoding at low latencies is that you have to let go of lookahead, almost all of the lookahead. Lookahead, to give a brief explanation, is the way for the encoder to look ahead at what's coming after the current frame and use that information in the current encoding process, to be prepared for what comes next and optimize the encoding based on that. But if you don't have the luxury of lookahead, it hurts the quality and the bitrate a lot.
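In libx264 terms, giving up lookahead usually means something like the `zerolatency` tune, which among other things sets the rate-control lookahead to zero and disables B-frames. A hedged sketch of typical flags, not a recommendation for any specific deployment:

```python
def low_latency_x264_args(fps: int = 25, keyint_s: int = 2) -> list:
    """Typical libx264 low-latency flags: 'zerolatency' drops the lookahead
    buffer and B-frames, trading compression efficiency for per-frame delay."""
    return [
        "-c:v", "libx264",
        "-tune", "zerolatency",     # rc-lookahead=0, bframes=0, sliced threads
        "-preset", "veryfast",      # spend less CPU time per frame
        "-g", str(fps * keyint_s),  # keyframe interval, e.g. 2 s at 25 fps
    ]
```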

So for that reason, we have to not use it, or use a very limited amount of it. That's one thing that we are dealing with in the encoding part of our chain. After that, we get to the distribution side, after we have generated the compressed video. And just before going to that, I should tell you that latencies sometimes come from places that you don't expect. We do sometimes see huge amounts of latency coming just from the cameras and all the equipment that delivers raw feeds from the studios to our encoders.

Mark: Maybe it's helpful to give a really quick overview, because a lot of our audience is probably used to getting a mezzanine file as the input, and they may not be completely aware of all the physical devices that are in the chain. So when we talk glass-to-glass latency, I run into this a lot. I spend most of my time, as I think a lot of the audience knows, on the video encoding side. And so people will want to know: what's the latency of the encoder? My standard response is that it's often the wrong question, because it is so tiny in the whole glass-to-glass picture. Obviously, yes, there are some specifics, so I'm grossly oversimplifying. But share with us what that workflow looks like, even just what the production workflow looks like. I think that would be interesting.

Behnam: Sure. Again, glass-to-glass latency is the keyword here. All of our workflow has been designed and implemented in a way that minimizes that latency. The way that we are working is that we have cameras in the studios, and they are connected via SDI cables to the switches, and then go directly to the encoders. Our encoders are sitting technically next to the studios, in a server room at each and every one of our studios.

And the reason for that is that we want to receive raw, uncompressed video; we want to compress the video once, not more than once. This is a part where I receive a lot of questions. People ask me, why are you not using cloud-based encoders? And one of the simple answers is that for a thousand studios, each of them requiring four or five qualities, the cost is just going to be too huge for us to host all of our encoders in the cloud.

Mark: And are you running on Intel machines, AMD, a mix?

Behnam: Right now we are using Intel, but we are experimenting with AMD as well.

Mark: When you have 64 physical cores, you can do a lot.

Behnam: Exactly, yes. Just seeing the number of cores is very, very interesting. But you have to think about this part of the equation as well: many of the things in the video encoding part of the whole chain, inside the encoder, are sequential. There might not be that much benefit between 64 cores and 6,400 cores. What matters is how much power each one of those cores offers, and some features that these CPUs provide, like AVX and AVX2, things like that.

Maybe more on the Intel side of things, those may be more important than the number of cores themselves. But nevertheless, yes, you're right, we are experimenting with AMD, I must say. And it's very interesting. We should have the results soon and then decide what our course of action will be for future games, yes.

Mark: But let me just make sure I heard you correctly. So what I did not hear you say about the cloud, and maybe I cut you off, you didn't finish your thought. You said cost was the issue. You did not say latency. So did I hear you correctly? Or is there still a latency consideration?

Behnam: Well, yes. Cost is one thing, but of course you can deliver to the cloud very fast using some peering, some very, very good dedicated network lines, and very, very light encoding in the studios, of course. But to be honest, nothing is free. Cost, money-wise, is one of them. The other thing is time. If we want to send it to the cloud, we have to transcode, sorry, encode, because we are receiving the raw input. We have to encode it once, then send it to the cloud and transcode it there.

So it will definitely add to the time. And to be honest with you, if you are aiming for the lowest latency possible, then the encoder latency becomes non-negligible, unless you do some tweaks to it, go deep into the code, and not everybody is willing to do that and invest that much in that part of the stack. So for us, yes, cost, and latency after that, are the two factors. As I said, our whole infrastructure and our whole solution are based on getting the lowest latency possible.

So what we are doing is encoding on-prem, and then pushing those qualities out of our encoders into the cloud. After that it goes into the cloud and just gets distributed around the world, basically. That's the studio part and the encoding part. Then we get to the distribution side of things. And I must say that, especially if you are operating live in and around Europe and North America, the distribution latency wouldn't be that much of a burden with current solutions, which mainly rely on RTMP to move the stream around.

But as soon as you are trying to reach places away from these two, say Europe and North America, like East Asia, then you get into problems. The latency becomes an issue, especially with RTMP. And this is one of the areas that we are currently working on, trying to find a solution.

The main reason is that there is a big gap, on almost all of the networks, between the hops in Europe and Asia. When you are using something like RTMP, because it's built on top of TCP, you get the TCP problem of head-of-line blocking: it sends the packets, waits for the acknowledgement from the receiver, and then continues to send once it has received the acknowledgement. If the hop is just too big, I mean the step is too big, then the time to receive those acknowledgements increases, and that basically creates a lot of problems. So that is another part that we are trying to deal with.
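A rough model of why long hops hurt TCP-based protocols like RTMP: a single connection can move at most one window of data per round trip, so throughput is bounded by window size divided by RTT. The window and RTT values below are illustrative:

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on one TCP connection's throughput: one full window per
    round trip. A Europe-Asia RTT of ~250 ms with a 64 KiB window caps a
    single stream at roughly 2 Mbps, regardless of link capacity."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e6
```

Window scaling raises that ceiling, but head-of-line blocking still stalls everything behind a single lost packet, which is part of why UDP-based transports are attractive on these paths.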

But right now we are doing RTMP in that part. And then we get to the most troublesome part, which is the last mile, as soon as we deliver our video to our edge servers and edge points and try to deliver it to the users. There is a huge range of conditions on the front-end side of things: people are on different networks, on different devices, or on mobile networks. And we see that the majority, even for us, the majority of our users are on mobile devices.

And the mobile connections are much more variable than home connections or fixed-line connections. So this is an area that is highly interesting, and that is the part where HLS and WebRTC and everything come into play. That is what everyone is trying to solve right now. And we did it our way, basically.

Mark: Yeah, understand. By the way, how big is your engineering team? How many folks do you have working on solving these problems?

Behnam: Well, when we are talking about the engineering team, it's different. For video engineering itself, it's around 35 of us, including the QA engineers. But when we are talking about engineering in Evolution overall, there are more than three or four hundred of us: the people creating the games, the network people, and the whole security side of things. Because security is a huge thing for us.

Mark: Oh, it has to be. There are some bad actors out there who would love to attack.

Behnam: True. And they are doing it basically 24/7.

Mark: 24/7. Exactly.

Behnam: And making sure of the integrity of the games, entirely, because that's,

Mark: that's your whole business. If they lose confidence then yeah, that's.

Well, this is really interesting. Thank you for sharing the insights. There must be more than one or two, but let's just keep it to a few: what are some lessons learned looking back over the years? Maybe even some decisions where you say, wow, we really did the right thing, and then, if I had to redo it again, I might choose a different technology approach, whatever. I think that's always interesting to hear.

Behnam: Sure. What we learned over the years is that we should have had more confidence in ourselves. One thing is that if you want to do something that is radically different from the standard use cases out there, something that fits your own business only and no one else's, you have to do it yourself. What we came to regret was that we had relied on third-party tools. Not that they were bad or anything, but we kind of had to bend to the way those tools and technologies were working, rather than addressing what our business and our company needed.

So we came to the conclusion that for many, many parts of our solution, instead of using third-party tools and vendor-provided tools, we should have done it ourselves. And we have actually started to do that right now. Also, using open source stuff is not always free.

We have to define what's free and what's not. When you are getting something open source, it looks like you just clone the Git repository and start using it. But usually it's not that easy. You have to modify it, and you have to maintain it yourself. Sometimes it ends up being more expensive in terms of the money paid for the people you have to hire to maintain and customize these tools and technologies for you.

When it comes to vendor-provided solutions, usually it makes the most sense for the vendors to make something that addresses the needs of a large number of people. They are trying to create something that is usable for the biggest audience possible.

And that's a good thing. Having the standard there is absolutely a good thing; I'm not saying it's a bad thing. But the thing is that if your business, like ours, doesn't necessarily fit the use cases that those standards are trying to address, then you're going to be in trouble.

Let me give you an instance of it: WebRTC. It is an amazing piece of technology. In cases like cloud gaming, it is just magnificent, the way it works. Although at an expense: you have to spend 20, 30, 40 megabits per second to get a decent HD quality. But again, you are pressing a button and the character just feels like there is no latency at all. Of course, there is something there. But the problem with WebRTC for us is that, yes, we would love to have that minimal amount of latency, but we have studios that we spent a lot of money to build, and games that we spent a lot of money to build, and we want them to look good. And if we want them to look good over WebRTC, we have to go for 30, 40 megabits per second. One of the reasons being that in WebRTC, when you are using H.264, you have to use the baseline profile, basically.
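That bitrate gap is easier to see in bits per pixel. The comparison below is a rough calculation with illustrative numbers, not measured data from Evolution's streams:

```python
def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: float) -> float:
    """Bits spent per pixel per frame, a crude efficiency yardstick.
    Baseline-profile WebRTC at 30 Mbps for 1080p25 spends ~0.58 bpp,
    while a well-tuned high-profile stream at 5 Mbps spends under 0.1 bpp."""
    return bitrate_mbps * 1e6 / (width * height * fps)
```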

Mark: And you're talking about 1080P resolution?

Behnam: Correct. Yes, 1080p at 25 frames per second. Although I've seen it posted on Twitter that somebody tested the high profile on WebRTC, but I'm not sure what it was. The risk is there that if you are doing some magic with the packets and headers and things like that, and you are trying to get playback on the generic browsers, it becomes troublesome. So there's a risk going there.

So for us, in order to make something that looks better than the WebRTC streams but is much faster than HLS and LL-DASH, sorry, regular DASH, and all that stuff, we had to create our own. For example, we were doing RTMP over WebSocket, and recently we are using fragmented MP4 over WebSocket. So we are opening the WebSocket and pushing the streams directly to the users. Well, that comes at a cost, of course; it was all about the cost. We have to have larger servers in order to be able to scale it.
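Pushing fragmented MP4 over a WebSocket means splitting the byte stream at ISO-BMFF box boundaries so each message is a playable unit, typically a `moof`+`mdat` pair. This parser is an illustrative sketch of that framing, not Evolution's implementation, and it ignores the rare 64-bit box sizes for brevity:

```python
import struct

def split_boxes(data: bytes):
    """Split an ISO-BMFF (fMP4) byte stream into top-level boxes.
    Each box starts with a 4-byte big-endian size and a 4-byte type;
    a server can then send, say, each moof+mdat pair as one WebSocket
    message. (The size == 0 and size == 1 special cases are not handled.)"""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii")
        boxes.append((box_type, data[offset:offset + size]))
        offset += size
    return boxes
```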

Mark: And you also have to have a custom player. I mean, you have to have your own player, but you've always had a player. Is that correct?

Behnam: Yes. Well, the thing is that our video is tied to what's happening in the games, and it's not just the video. There is a huge amount of data running in the front end that creates the game, and they have to be synced together: whatever happens in the game UI and the video itself. So yes, naturally, we have a custom player.

Mark: You must have a big player development team.

Behnam: Oh, yes.

Mark: Okay. So out of those 300 or even 400 engineers, how many are working on the player? It must be a pretty big number.

Behnam: Well, the thing is, it's not that big. We are lucky to have very, very talented people in our team. Video itself has around 35 of those people in the video team. We have five or six of them on the front end, and a few of them working as core developers on the players. Technically we are using open source stuff but modifying it heavily, and in some cases we had to develop our own players completely from the ground up. But in cases like the iPhones, which are very, very restricted in terms of what you can play and get playback on, we are still using HLS there. That's not the optimal solution right now, but we're looking for alternatives.

Mark: Oh, that's interesting. So for iOS, which of course makes sense, you have to use HLS, but on Android or some other connected devices, you're able to use your custom approach, opening up the WebSocket.

I remember you telling me about that when we chatted, I don't know, a number of months ago. And I thought, wow, what an ingenious approach for your application. Now, again, there's other people that would say, Hey, that's cool, but it wouldn't work for us.

And I think, Behnam, even as we're closing here on this discussion, this is actually a really important point, and I want to make it; feel free to respond and amplify if you agree. In so much of video, we hear both from vendors and even our own peers working in engineering, and sometimes you hear things as if they're facts, but they're facts without context. And in so much of video, the context is everything. So whether it's selecting a technology, selecting a codec, selecting anything, it really has to come back to the answer.

If someone says, well, what should I use? First of all, it depends, which everybody hates. But instead of saying, it depends, it's: well, what's the context? Because even in this discussion here, we've talked about HLS and Low-Latency HLS, and how that's required on certain devices, and then WebRTC, and here you're needing to span all of those. It'd be a beautiful thing if you didn't have to, 'cause it'd make your life easier. But that's the reality of the world that we're in. And I think that's just an important point to make.

Behnam: Correct. Well, there is this very famous saying; everybody says that content is king. And actually, the content is the thing that basically sets the context for us. If you are delivering, so to say, a blockbuster movie on a platform, then latency is irrelevant. It's something else. Whatever happened in that movie happened probably last year when they were filming.

But when you are delivering something like, I don't know, when you're talking with somebody, a conference, or doing an interview, or.

Mark: Exactly, like what we're doing right here.

Behnam: Exactly. Then, you are absolutely correct, the context is everything. There's never a straightforward, easy answer to "which solution should I use?" It depends on the context.

Mark: Yeah, that's great. Well, I am really appreciative and I know the listeners are too with the nuggets that you've shared and we'll definitely have to do this again and continue the conversation.

A lot more that we could talk about. Real quick, what are you excited about? I'm asking an open-ended question; you can answer from a technology perspective, or just a trend in the market, or anything. What are you excited about right now?

Behnam: Well, so many things right now. I'm excited about so many things that we are working on right now, which, to be honest with you, I cannot disclose at this point in time, hopefully.

Mark: Just a little inkling, just a little tiny.

Behnam: The big keyword would be to drop the RTMP. I have nothing against it, I love it, but letting go of it will open so many doors. So many doors: multiple codecs, multiple features, multiple tracks and stuff like that. It's a different thing. And being able to serve long distances, high quality, even higher than HD, and in the lowest latency possible.

It's an interesting thing. With the advent of UDP-based protocols, I'm looking forward to them. Things like QUIC, things like WebTransport. Recently there is this working group that's working on QUIC and Media over QUIC. And not just using QUIC for delivering to the clients, but why not use it on the backend side of things. These are the things that get me really, really excited. And of course, the newer-generation codecs. It is absolutely the time for them. We have VR games which are delivering 4K, and to be honest with you, we need that next thing to make it.

Mark: Because if you're needing, for traditional WebRTC anyway, to get the quality, you need 20 or 30 megabits just for 1080p H.264, like, wow, that just doesn't scale. And then if you wanna push the resolution, yeah, you need AV1, or for certain devices, HEVC and/or VVC and the other codecs, that's great.

Well, super interesting you mention QUIC, I just made a mental note. Maybe that's something we'll talk about later in our next interview, but I'm gonna get someone on to talk about QUIC because I actually am very excited as well. When Google introduced Stadia a few years ago, of course they had a whole technical presentation, and QUIC is, it's very interesting what they're doing to even control, measure the speed of the network and use that as an input for the QP of the encoder. Like, that is tight integration at the frame level. That is next level. And I'm actually not a big game person. I enjoy games, but I choose to spend my time doing other things.

But I've had a Stadia membership from the very beginning, barely play it, but when I do turn it on, I have the same experience you have, even though I understand consoles are more responsive, and I totally get it. If I were a hardcore gamer, I would probably have the reaction that a lot of the hardcore gamers have to Stadia. But I am just amazed, being a technologist, at how well it works. It is just mind-boggling to think that all of the compute is happening in the cloud, and here I am with just a lightweight input device and a very high quality image. It blows me away.

Behnam: True. Just a very quick note, I had some experience around encoding for cloud gaming, and I've seen what you're saying: yes, using that information from the client side to adjust your encoding, because you cannot use the traditional way of adaptive bitrate, where you create multiple streams. And it is going to be very, very interesting.
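The feedback loop described here, where client-side network measurements steer the encoder's quantization parameter directly instead of switching between pre-encoded ABR renditions, can be sketched roughly as follows. This is an illustrative sketch, not Stadia's or Evolution's actual implementation; the function name, thresholds, and step sizes are all assumptions.

```python
# Illustrative sketch of per-frame rate control driven by client-side
# network feedback, for cases (like cloud gaming) where a traditional
# multi-rendition ABR ladder is not an option. All names and numeric
# thresholds here are hypothetical.

def next_qp(current_qp, measured_kbps, target_kbps,
            qp_min=18, qp_max=45, step=2):
    """Nudge the quantization parameter toward the network's capacity.

    A higher QP means coarser quantization and a lower bitrate, so when
    the client reports less available bandwidth than we are producing,
    we raise QP; when there is clear headroom, we lower it for quality.
    """
    if measured_kbps < target_kbps * 0.9:      # congestion: cut bitrate
        return min(current_qp + step, qp_max)
    if measured_kbps > target_kbps * 1.2:      # headroom: improve quality
        return max(current_qp - step, qp_min)
    return current_qp                          # within tolerance: hold

# Example: the client reports only 4 Mbps while we produce 6 Mbps,
# so the encoder coarsens quantization for the next frame.
qp = next_qp(current_qp=30, measured_kbps=4000, target_kbps=6000)
```

The point of the sketch is the tight coupling Mark mentions: the network measurement feeds the encoder every frame, rather than a player picking a different pre-encoded rendition every few seconds.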

Mark: Yeah, and with the metaverse, which, regardless of what that's ultimately gonna look like at the end of the day, whether it's avatars or other rendered beings or whatever it is, the fact of the matter is low latency, you're gonna have two-way communication. Eventually, I think the avatars will become very much real life. Whether it's literally me or just somebody I create, I don't know, but it's exciting. Well, Behnam, thank you so much for coming on inside the VideoVerse, and you are one of my first interviews for this new show.

So congratulations. We'll definitely have you on again and really look forward to tracking what you guys are doing. I know Evolution Gaming is really at the forefront of high scale, extremely low latency, high quality. You've built a big business and it's exciting. So thank you again.

Behnam: Thank you, thank you for having me.

Use cases of the low latency
Pros and cons of CRF
Workflow to minimize the latency
Pros and cons of doing it ourselves or using third party tools
Talking about the team
Context is the king
Current exciting things