Channel: Bartek's coding blog

Flexible Particle System - Code Optimization

performance

After playing with the tools, we have some more options to improve the performance of the particle system. This time, we need to rewrite parts of the code.

In total, the particle system now runs almost twice as fast as initially! Read on to see which pieces of code were changed.

The Series

Plan for this post

Start

We start with the numbers from the previous post (the last results):

Core i5 Sandy Bridge

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 429.195 | 608.598    | 460.299
181000 | 460.649 | 647.825    | 490.412
191000 | 489.206 | 688.603    | 520.302

Core i5 Ivy Bridge

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 529.188 | 746.594    | 570.297
181000 | 565.648 | 792.824    | 605.912
191000 | 593.956 | 832.478    | 640.739

(time in milliseconds)

SIMD preparation

Previously I tried to force the compiler to use SSE2 or AVX instructions. As we saw, there was a nice performance boost (around 10% for AVX). But hey... SIMD should calculate things 4x or 8x faster... so why did we get only a small improvement?

In real life it's not that simple:

  • SIMD can process 4 or 8 elements at a time, but we still need to wait for memory. See my summary of the talk "Native code performance on modern CPUs" for more information. In general, we can get at most around a 2.5x speedup with SSE2/4, and that assumes ideally 'vectorizable' code. Not all code is in such a perfect state.
  • Current CPUs are superscalar, which means they can run several different instructions in parallel. Sometimes SIMD code can even be slower than the code a compiler generates from the original.
  • An additional small problem: SSE registers need memory chunks aligned to 128 bits (16-byte alignment). We need to take care of this when we allocate memory, so not every variable or array is suitable for SSE code.

What can we do?

  • Since particles operate mostly on glm::vec4, there is a high chance of using the full power of SSE: 4 floats per vector, 16 bytes.
  • glm offers a very nice type, glm::simdVec4, which basically adds SSE code to the common vector functions. So I simply changed glm::vec4 to glm::simdVec4.
  • Memory must be aligned, so I used _aligned_malloc and _aligned_free.

Some code examples:

// particles.h, in ParticleData class declaration
glm::simdVec4 *m_pos;
glm::simdVec4 *m_col;

// in particles.cpp, generate() method:
m_pos = (glm::simdVec4 *)_aligned_malloc(sizeof(glm::vec4)*maxSize, 16);
m_col = (glm::simdVec4 *)_aligned_malloc(sizeof(glm::vec4)*maxSize, 16);

// particles.cpp, destructor
_aligned_free(m_pos);
_aligned_free(m_col);

The results after changes (Visual Studio):

Sandy Bridge:

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 387.563 | 495.281    | 394.641
181000 | 417.320 | 529.660    | 426.330
191000 | 447.665 | 563.833    | 450.416

Ivy Bridge:

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 476.625 | 596.313    | 483.656
181000 | 514.328 | 639.664    | 523.332
191000 | 552.666 | 682.333    | 558.667

Wow: almost 20% improvement! All from proper data structures (for the vectors) and memory alignment.

SSE and AVX instructions

So far we have a nice speedup... Now let's write some SSE code for the most critical loops. Will it run faster?

Euler update, SSE:

__m128 ga = globalA.Data;
__m128 *pa, *pb, pc;
__m128 ldt = _mm_set_ps1(localDT);

size_t i;
for (i = 0; i < endId; i++)
{
    pa = (__m128*)(&p->m_acc[i].x);
    *pa = _mm_add_ps(*pa, ga);
}

for (i = 0; i < endId; i++)
{
    pa = (__m128*)(&p->m_vel[i].x);
    pb = (__m128*)(&p->m_acc[i].x);
    pc = _mm_mul_ps(*pb, ldt);
    *pa = _mm_add_ps(*pa, pc);
}

for (i = 0; i < endId; i++)
{
    pa = (__m128*)(&p->m_pos[i].x);
    pb = (__m128*)(&p->m_vel[i].x);
    pc = _mm_mul_ps(*pb, ldt);
    *pa = _mm_add_ps(*pa, pc);
}

Readability is much worse in this case.

The results:

Sandy Bridge

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 386.453 | 492.727    | 393.363
181000 | 416.182 | 529.591    | 423.795
191000 | 444.398 | 564.199    | 450.099

Ivy Bridge:

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 481.172 | 584.086    | 486.543
181000 | 516.271 | 623.136    | 514.068
191000 | 547.034 | 656.517    | 541.258

Not much, unfortunately. This is because glm::simdVec4 already uses SSE code internally, so there was no point in rewriting it by hand: we lose readability and the performance gain is questionable.

Pointer aliasing: __restrict keyword

In my previous post I got a very interesting comment from Matías N. Goldberg:

... In my experience you get massive performance improvements by just telling the compiler the pointers do not alias to each other...

Matías suggests using the __restrict keyword to tell the compiler that the pointers do not alias. For instance:

glm::vec4 * __restrict acc = p->m_acc;
glm::vec4 * __restrict vel = p->m_vel;
glm::vec4 * __restrict pos = p->m_pos;

And then, instead of p->m_pos, just use the pos pointer.

When I made this change in all the updater (and generator) code, I got the following results:

Sandy Bridge

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 372.641 | 476.820    | 376.410
181000 | 401.705 | 508.353    | 404.176
191000 | 427.588 | 542.794    | 432.397

Ivy Bridge

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 475.609 | 591.805    | 480.402
181000 | 502.201 | 620.601    | 512.300
191000 | 534.150 | 667.575    | 541.788

This is not a massive improvement, but it's still worth testing.

Random Number Generator

I have focused mostly on the updaters so far, but the generators could also be improved a bit. A random number generator is heavily used in that module. What if we changed it?

Right now the standard C rand() function is called. For a particle system we probably do not need anything more advanced (like a normal-distribution generator) - a uniform distribution is just fine... but maybe there are faster generators than the default one?

I've searched and found something: here, here and here

I've tried to use this generator:

// http://www.rgba.org/articles/sfrand/sfrand.htm
static unsigned int mirand = 1;

float sfrand(void)
{
    unsigned int a;
    mirand *= 16807;
    a = (mirand & 0x007fffff) | 0x40000000;
    return (*((float *)&a) - 3.0f);
}

It has a uniform distribution over [-1, 1) and 23 bits of precision (C rand() typically gives only 15 bits, since RAND_MAX is 32767 in Visual Studio).

The results:

Sandy Bridge:

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 334.633 | 443.816    | 348.908
181000 | 363.954 | 474.477    | 372.739
191000 | 384.869 | 501.435    | 394.217

Ivy Bridge:

count  | tunnel  | attractors | fountain
-------|---------|------------|---------
171000 | 412.172 | 531.586    | 429.293
181000 | 450.146 | 573.073    | 463.037
191000 | 473.518 | 606.759    | 484.880

Wow! Now it is around a 28% total improvement on Sandy Bridge, and almost the same on Ivy Bridge.

Wrap up

Final results

CPU   | count  | tunnel           | attractors       | fountain
------|--------|------------------|------------------|-----------------
Sandy | 191000 | 384.869 (-21.3%) | 501.435 (-27.2%) | 394.217 (-24.2%)
Ivy   | 191000 | 473.518 (-20.3%) | 606.759 (-27.1%) | 484.880 (-24.3%)

Total (measured against the times from before the tools optimization):

CPU   | tunnel | attractors | fountain
------|--------|------------|---------
Sandy | 35.5%  | 43.5%      | 39.7%
Ivy   | 33.2%  | 38.2%      | 35.6%

We can 'reverse' those numbers and say that the attractor effect now runs almost twice as fast! Not bad!

Conclusion:

  • Memory alignment and proper data structures are the key factors.
  • Write manual SIMD code only when needed; usually it is better to rely on the compiler and third-party libraries.
  • Describe your code better, for instance with the __restrict keyword, so the compiler can generate better code.
  • The random number generator can make a difference.

What's Next

The renderer is very simple so far, and maybe there are options to improve its code. For sure, we have to look at CPU-to-GPU memory transfers and better buffer usage.

