Chri Besenbruch, CEO of Deep Render, sees many issues with the way video compression standards are developed today. He thinks they are not progressing fast enough, laments that they suffer from legal uncertainty, and objects to their reliance on specialized hardware for acceleration.
“The codec development process is broken,” Besenbruch said in an interview with TechCrunch ahead of Disrupt, where Deep Render is participating in the Startup Battlefield 200. “The industry faces a major challenge: finding a new way forward and researching new innovations.”
In search of a better way, Besenbruch co-founded Deep Render with Arsalan Zafar, whom he met at Imperial College London. At the time, Besenbruch was studying computer science and machine learning. He and Zafar collaborated on a research project that involved distributing terabytes of video across a network, during which they say they experienced the flaws of existing compression technology firsthand.
The last time TechCrunch covered Deep Render, the startup had just closed a £1.6 million (~$1.81 million) seed round led by Pentech Ventures with participation from Speedinvest. In the roughly two years since, Deep Render has raised several million dollars more from existing investors, bringing its total raised to $5.7 million.
“We thought to ourselves, if it is difficult to scale internet pipes, the only thing we can do is make the data that flows through the pipes smaller,” Besenbruch said. “Hence, we decided to combine machine learning, AI technology, and compression to develop an entirely new way to compress data and get better compression ratios for images and video.”
Deep Render is not the first to apply AI to video compression. Alphabet’s DeepMind adapted a machine learning algorithm originally developed to play board games to the problem of compressing YouTube videos, reducing the amount of data the video-sharing service needs to stream to users by 4%. Elsewhere, the startup WaveOne claims that its machine learning-based video codec outperforms all current standards across popular quality metrics.
Deep Render’s solution, by contrast, is not platform-dependent. To create it, Besenbruch says, the company collected a data set of more than 10 million video sequences on which it trained algorithms to compress video data efficiently. Deep Render used a combination of on-premises and cloud hardware for training, with the former comprising more than a hundred GPUs.
Deep Render claims that the resulting compression standard is five times better than HEVC, a widely used codec, and that it can run in real time on mobile devices equipped with a dedicated AI accelerator (for example, the Apple Neural Engine in modern iPhones). Besenbruch says the company is in talks with three big tech companies, all with market caps above $300 billion, about paid pilots, though he declined to share names.
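To put the efficiency claim in perspective, here is a back-of-the-envelope sketch of what a fivefold gain over HEVC would mean for streaming bitrates. The bitrate figures below are illustrative assumptions, not numbers from Deep Render:

```python
def bitrate_at_equal_quality(hevc_bitrate_mbps: float, efficiency_gain: float = 5.0) -> float:
    """Bitrate needed to match a given HEVC stream's quality,
    assuming a codec that is `efficiency_gain` times more efficient."""
    return hevc_bitrate_mbps / efficiency_gain

# Hypothetical example: a 1080p HEVC stream at ~5 Mbps would need
# only ~1 Mbps at the same quality under a 5x efficiency gain.
print(bitrate_at_equal_quality(5.0))  # 1.0
```

Under that assumption, the same video library could be served with roughly 80% less bandwidth, which is the commercial angle for streaming platforms.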
Eddie Anderson, co-founder at Pentech and board member at Deep Render, shared via email: “Deep Render’s machine learning approach to codecs completely disrupts an established market. Not only is its path to market via software rather than hardware, but its [compression] performance is much better than the current state of the art. As bandwidth demands continue to increase, the solution they provide has the potential to drive greatly improved commercial performance for existing media owners and distributors.”
Deep Render currently employs 20 people. By the end of 2023, Besenbruch expects that number to more than triple, to 62.