[CinCV TNG] Benchmarking different versions of Cinelerra?

Einar Rünkaru einarrunkaru at gmail.com
Wed Jul 19 19:12:01 CEST 2017

On 07/19/2017 09:49 AM, Andrew Randrianasulu wrote:
> On Tuesday 18 July 2017 23:26:25, Einar Rünkaru wrote:
>> On 07/18/2017 09:43 PM, Andrew Randrianasulu wrote:
>>> Hi, all!
>>> I tried to compare playback performance between three different versions
>>> of Cinelerra, but as Einar correctly warned - benchmarking is not simple!
>>> While on my system some parameters, like visible track size
>>> (32->64->128) and timeline resolution, don't make much of a
>>> difference, on other systems they may slow things down more or less
>>> relative to decoding/effect computations/encoding.
>>> So maybe a standardized set of Cinelerra (CV | CVE | GG) config files
>>> could be developed, along with publicly accessible test videos in
>>> various formats?
>> Andrew, start creating.
> Yes, I see irony/light sarcasm here.

We lack people who do something for Cinelerra, so I had to ask directly.

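To make a first step concrete: ffmpeg's synthetic test sources can produce reproducible clips for such a benchmark set. A sketch (the size/codec matrix and file names are only an illustration, not an agreed standard) that prints one ffmpeg generation command per combination; piping its output to `sh` would actually create the clips, assuming an ffmpeg build with the lavfi testsrc2 source:

```shell
# Build the benchmark matrix as a list of ffmpeg commands, one per line.
# Sizes, codecs and file names below are illustrative placeholders.
cmds=""
for size in 640x360 1280x720 1920x1080; do
  for codec in mpeg2video mjpeg libx264; do
    cmds="${cmds}ffmpeg -f lavfi -i testsrc2=size=${size}:rate=25:duration=10 -c:v ${codec} -y bench_${codec}_${size}.mkv
"
  done
done
printf '%s' "$cmds"
```

testsrc2 draws deterministic moving content, so the decode load is identical across runs and machines; the same matrix idea would extend to per-version Cinelerra config files.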
> Well, right now Cinelerra 5.1 (GG) provides good integration of ffmpeg's
> filter subsystem, and maybe one day hw decoding/filtering will be
> implemented, too (but do the developers of Cinelerra have the required
> hardware? VA-API is today exposed on Intel and AMD GPUs, and partially on
> _some_ Nvidia cards via the VA-API state tracker (open source driver). I
> mean here the decoding side of it, not encoding. But Cinelerra can already
> output processed frames to an external encoder, and this encoder can be a
> hw-accelerated ffmpeg, right?).
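For what it's worth, the encoder half of that already works with stock ffmpeg: anything that can emit yuv4mpeg on stdout can be piped into a VA-API encoder. A sketch, where the producer command and the device path are placeholders:

```shell
# Hypothetical producer writing raw frames as yuv4mpeg to stdout,
# piped into ffmpeg for GPU (VA-API) H.264 encoding:
some_renderer --format y4m - | \
  ffmpeg -f yuv4mpegpipe -i - \
         -vaapi_device /dev/dri/renderD128 \
         -vf 'format=nv12,hwupload' -c:v h264_vaapi out.mp4
```

The `format=nv12,hwupload` step converts the incoming frames and uploads them to GPU memory, which is what the h264_vaapi encoder consumes.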

I have never tried hw-accelerated decoding, but I have read that it is not
high quality.

I think that hw-accelerated decoding is developed for the workflow
decode - play. But Cinelerra needs the workflow decode - convert to
internal format - modify - convert to playback format - play. The
question is how efficient it is to get a decoded frame back from the
hardware - there may be a bottleneck, and the bottleneck may depend on
the hardware model.
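That readback cost can be measured with plain ffmpeg, independent of Cinelerra. A sketch assuming a VA-API capable GPU at /dev/dri/renderD128 and some input file in.mp4 (both placeholders):

```shell
# Run 1: decode entirely on the GPU; frames stay in video memory
# (the decode->play case):
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi -i in.mp4 -f null -

# Run 2: decode on the GPU, then download every frame to system memory --
# the step an editor needs before it can touch pixels, and the suspected
# bottleneck:
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi -i in.mp4 \
       -vf 'hwdownload,format=nv12' -f null -
```

Comparing the reported speed=...x figures of the two runs shows how expensive the frame download is on a given hardware model.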
> Speaking about the recently-removed direct_copy mechanism - I definitely
> will not object to a reimplementation of it on top of (or more accurately
> - inside?) AVlibs, because modern ffmpeg provides more I-only codecs, and
> those codecs will be accessible via AVlibs for both de- and en-coding. But
> this is a question of architecture - can I kindly ask Einar to keep this
> future use of a *new* direct copy in mind while working on all those
> abstraction features like Vframe etc?

There may be a lot of I-only codecs, but there are a lot of other codecs
too. My expectation is that the input can be I-frames only, but the
output will usually be some other, more effectively compressed format.
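As a command-line illustration of that split (file names and codec choices are placeholders): an I-frame-only intermediate for editing, then a long-GOP delivery encode:

```shell
# Editing intermediate: every frame is an I-frame, so any frame can be
# decoded without reading its neighbours (utvideo or prores would also do):
ffmpeg -i source.mp4 -c:v mjpeg -q:v 3 -c:a pcm_s16le intermediate.mov

# Delivery encode: long-GOP H.264 -- far smaller, but frames depend on
# P/B references, so it is a poor editing format:
ffmpeg -i intermediate.mov -c:v libx264 -crf 20 -c:a aac delivery.mp4
```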
> Also, please leave more comments in the code - I found reading the simple
> English comments inside the source files in the quicktime directory very
> educational (even if not helpful by now, when ffmpeg has grown a much
> larger number {and better quality!} of de/muxers and de/coders, and has
> become the sort of thing libquicktime tried to be, IMO).

I know that I write too few comments; I'll try to improve.


More information about the Cinelerra mailing list