“...That was the original idea from day one - the elimination of the tweening process. But it is certainly not the only feature of Synfig that makes it unique. In addition to eliminating the tweening process, I also wanted Synfig to be used for pretty much every part of production except story-boarding and editing.”
Posted by KonstantinDmitriev on September 8, 2015
On September 1st we started working on the optimization of Synfig. Many thanks to everyone who contributed to our fundraising campaign and made this possible! It's time to share the first results.
As you probably know, there are two repositories where the development takes place. The first one, https://github.com/blackwarthog/synfig/commits/rendering, contains the Synfig code with the new rendering framework. The second one is a special "Lab" repository for tests.
Ivan Mahonin (our hired developer) investigated possible optimizations for vector artwork (contours and regions). He took the following file for testing:
In the current version of Synfig (the old rendering engine), on an Intel® Core™ i7-4790K processor, the rendering takes approximately 107.483 milliseconds (about 1/10 of a second). This is plain software rendering, using only a single core.
Ivan took the vector rendering algorithm from Synfig and copied it into his Lab testing application. The results were quite striking:
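To give a feel for the kind of algorithm being benchmarked, here is a minimal illustrative sketch of scanline polygon rasterization with the even-odd fill rule, a classic approach to filling vector regions. This is not Synfig's actual contour/region code; the `rasterize` function and its boolean-grid output are simplifications for illustration only.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Minimal scanline rasterizer for a closed polygon (even-odd fill rule).
// An illustrative sketch of the general technique, not Synfig's real code.
using Point = std::pair<double, double>; // (x, y)

std::vector<std::vector<bool>> rasterize(const std::vector<Point>& poly,
                                         int width, int height) {
    std::vector<std::vector<bool>> image(height,
                                         std::vector<bool>(width, false));
    for (int y = 0; y < height; ++y) {
        double sy = y + 0.5;     // sample at the pixel centre
        std::vector<double> xs;  // x positions where edges cross this scanline
        for (std::size_t i = 0; i < poly.size(); ++i) {
            const Point& a = poly[i];
            const Point& b = poly[(i + 1) % poly.size()];
            // Edge crosses the scanline if its endpoints straddle it.
            if ((a.second <= sy) != (b.second <= sy)) {
                double t = (sy - a.second) / (b.second - a.second);
                xs.push_back(a.first + t * (b.first - a.first));
            }
        }
        std::sort(xs.begin(), xs.end());
        // Fill between each pair of crossings (even-odd rule).
        for (std::size_t i = 0; i + 1 < xs.size(); i += 2)
            for (int x = 0; x < width; ++x)
                if (x + 0.5 >= xs[i] && x + 0.5 < xs[i + 1])
                    image[y][x] = true;
    }
    return image;
}
```

The inner loops here are simple arithmetic over contiguous data, which is why such code is inherently fast; the overhead measured in the old renderer comes from the surrounding architecture, not from this kind of core loop.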
You can see the resulting image below (it has a different colour because the gamma settings are not applied yet).
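The colour shift is what you get when a gamma-encoding step is skipped. A minimal sketch of that step, assuming a simple power-law gamma of 2.2 (a typical value; Synfig's actual gamma pipeline may differ):

```cpp
#include <cmath>

// Convert a linear-light channel value in [0, 1] to a gamma-encoded
// display value. Skipping this step leaves midtones noticeably darker,
// which is why the test render's colours look different.
double apply_gamma(double linear, double gamma = 2.2) {
    return std::pow(linear, 1.0 / gamma);
}
```

For example, a linear midtone of 0.5 encodes to roughly 0.73, so an image rendered without this step appears darker than the reference.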
In other words, the vector rendering algorithm itself is very fast, and the slowdown comes from the poor architecture of the old Synfig renderer. The task, then, is to properly integrate this algorithm into the new rendering engine while avoiding all of those bottlenecks.
Also, the current renderer uses only one core, so there is even more room to improve speed by rendering several layers in parallel across multiple threads.
Next, Ivan started to investigate the possibilities of hardware optimization using GPU resources. His configuration is a GeForce GTX 750. In the same Lab repository he has implemented rendering using OpenGL's stencil buffer. Below you can see the same vector image rendered with and without antialiasing.
The rendering time for the images:
2.033 milliseconds (antialiased)
1.558 milliseconds (no antialiasing)
In other words, hardware rendering can be comparable to software rendering on a top 4-core processor.
But for this test we also have to add the time for loading resources into the graphics card's memory. The transfer takes around 2.5 ms, so this is where we have to arrange everything properly to avoid redundant transfers back and forth.
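One common way to avoid paying that transfer cost repeatedly is to cache GPU-side handles so each resource is uploaded only once. The sketch below is purely hypothetical — `GpuResourceCache` and its `upload` callback are invented names standing in for whatever mechanism the new engine actually uses:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical sketch: cache GPU-side handles keyed by a resource id, so a
// surface is uploaded at most once instead of being transferred on every
// render. The upload callback stands in for the ~2.5 ms transfer.
class GpuResourceCache {
public:
    using Handle = std::uint32_t;

    explicit GpuResourceCache(Handle (*upload)(int resource_id))
        : upload_(upload) {}

    Handle get(int resource_id) {
        auto it = cache_.find(resource_id);
        if (it != cache_.end())
            return it->second;            // already on the GPU, no transfer
        Handle h = upload_(resource_id);  // pay the transfer cost once
        cache_[resource_id] = h;
        return h;
    }

private:
    Handle (*upload_)(int);
    std::unordered_map<int, Handle> cache_;
};
```

With the upload amortized this way, the ~2 ms per-frame GPU render time quoted above would dominate, rather than the 2.5 ms transfer.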
In general, OpenGL optimization looks promising, but we have to consider that it has its own limitations: most probably we won't be able to implement every effect with it. For most basic layers, though, it should work very well.
Finally, Ivan made one more hardware optimization test by implementing vector rendering in OpenCL. It gives a good antialiased image, mostly identical to the one produced by software rendering. But the timings are not as good:
This is slower than single-threaded software rendering. On the other hand, OpenCL gives complete freedom for implementing any layer or effect, and it frees the processor for other operations. It might also show better results for cases other than vector artwork.
Well, that's all our results for today. As you can see, there is plenty of room for optimization in both software and hardware rendering. The archive with more detailed benchmark data is available for download here.
At this moment our fundraising campaign has already passed 20% of our funding goal for the next month. Many thanks to our latest contributors: andy.chevalier, Stephen Croft, migelito_ca, klwilcoxon, myles.strous, anark10n, Matt Jordan, and everyone who donated anonymously. Thank you!
Ivan continues his work, and I will keep you updated on his progress.