The time taken to complete the 3D content processing, and the memory load during processing, are very much dependent on the data passed in and on the parameters and options specified.
If you're experiencing long processing times, or significant memory load, then there are usually a number of things that can be done to improve the situation.
Using a timing progress callback object, i.e. one that logs progress reports with time stamps, is a good way to quickly get an idea of what is going on at the top level of the 3D processing operation, and which of the following optimisation points is most likely to apply in a given situation.
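As a rough illustration, a timing callback along the following lines can be used. The method name and parameters shown here are illustrative only; the class should be adapted to derive from the actual progress callback interface declared in the SDK headers.

#include <chrono>
#include <cstdio>

// Minimal timing progress callback sketch.
class cTimingProgressCallBack
{
    std::chrono::steady_clock::time_point _start;
public:
    cTimingProgressCallBack() : _start(std::chrono::steady_clock::now()) {}

    // assumed shape of the update method: a description of the current
    // top level step, plus a progress value
    void updateProgress(const char* description, float progress)
    {
        using namespace std::chrono;
        auto elapsedMS = duration_cast<milliseconds>(steady_clock::now() - _start).count();
        // keep this body cheap - see the note about callback overhead further down
        std::printf("[%lld ms] %s (%f)\n",
                    static_cast<long long>(elapsedMS), description, progress);
    }
};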
BSP generation for polygon soup data can be very significantly more expensive than for an equivalent scene that is represented in solid form; see the relevant documentation for more about supplying scene geometry in solid form.
If you have low detail representations of the objects in a scene
(e.g. for 3D collision, or level of detail rendering) then use these
instead of the full detail rendering geometry.
(Example code for building directly from some popular 3rd party physics engines can be found
here).
If you don't have a low detail representation then there are a number of tools
available (both free and commercial) for generating low detail representations from
high detail rendering geometry.
Note that, as far as PathEngine's 3D content processing is concerned,
things like texture coordinates, lighting, and so on are not an issue,
so this simplifies the geometry reduction problem.
If a significant amount of geometry in a scene will never be reached by pathfinding agents then it can be worth marking this geometry as excluded from PathEngine's 3D content processing.
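If geometry selection happens on the application side before content is passed in, this can be as simple as filtering. The cSceneMesh type and reachableByAgents flag in the following sketch are hypothetical, purely to illustrate the idea, and not part of any SDK interface.

#include <vector>

// Hypothetical per-mesh record in an application's export pipeline;
// 'reachableByAgents' would be authored or derived from gameplay data.
struct cSceneMesh
{
    bool reachableByAgents;
    // ... vertex and face data ...
};

// Only pass geometry that agents can actually reach through to the
// 3D content processing.
std::vector<const cSceneMesh*>
SelectGeometryForContentProcessing(const std::vector<cSceneMesh>& scene)
{
    std::vector<const cSceneMesh*> selected;
    for(const auto& mesh : scene)
        if(mesh.reachableByAgents)
            selected.push_back(&mesh);
    return selected;
}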
When a progress callback is passed in, the update function for this callback can be called a large number of times, so it is important to ensure that this function is fast.
Timing 3D processing for the same source content data with and without the progress callback is a quick way to determine if this is an issue.
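The sketch below simply wall-clocks the build both ways; RunContentProcessing is a placeholder standing in for whatever build call is actually being made, not a function from the SDK.

#include <chrono>
#include <cstdio>

// Placeholder for the actual content processing call; replace the body
// with the real build call, passing the progress callback or a null
// pointer as indicated by the argument.
static void RunContentProcessing(bool useProgressCallback)
{
    (void)useProgressCallback;
}

static double TimeSeconds(bool useProgressCallback)
{
    auto start = std::chrono::steady_clock::now();
    RunContentProcessing(useProgressCallback);
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return elapsed.count();
}

int main()
{
    std::printf("with callback: %.3fs\n", TimeSeconds(true));
    std::printf("without callback: %.3fs\n", TimeSeconds(false));
}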
The key factor in processing times with the voxel process is voxel size.
Larger voxel sizes can mean significantly faster processing.
If you need to increase voxel size it is important to be aware that this can have the
effect of closing off things like narrow doorways in the ground mesh result.
One way to counter this is to also reduce the pathfinding agent shape used at run-time.
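As a concrete (but purely illustrative) example of this shape adjustment, with made-up numbers and a simple square agent shape described by corner coordinates; how the shape object is then actually created should follow whatever agent shape creation calls are already in use.

#include <cstdint>
#include <vector>

struct cPoint { std::int32_t x, y; };

// Build a square agent shape with the given half-width, in world units.
static std::vector<cPoint> SquareAgentShape(std::int32_t halfWidth)
{
    return {
        {-halfWidth, -halfWidth},
        {-halfWidth,  halfWidth},
        { halfWidth,  halfWidth},
        { halfWidth, -halfWidth},
    };
}

int main()
{
    // Example values only: voxel size raised from 20 to 40 world units,
    // so shave roughly the extra voxel size off the agent half-width to
    // help keep narrow doorways passable. The exact adjustment is
    // something to tune against the content.
    const std::int32_t originalHalfWidth = 60;
    const std::int32_t voxelSizeIncrease = 40 - 20;
    std::vector<cPoint> reducedShape = SquareAgentShape(originalHalfWidth - voxelSizeIncrease);
    (void)reducedShape;
}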
Subdividing ground can be a performance hog in some situations,
in particular where there are large ground areas to be processed.
If possible, ensure that the 'stripTerrainHeightDetail' option is also being specified,
and that attributes for terrain regions are being set correctly.
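For reference, options of this kind are commonly supplied as name and value string pairs; the exact passing convention, and the value format shown here, should be checked against the documentation for the specific build call being used, so treat the following purely as a sketch.

// Sketch only: an options array as null-terminated name/value string pairs.
const char* const contentProcessingOptions[] =
{
    "stripTerrainHeightDetail", "true",
    0
};
// ... pass 'contentProcessingOptions' through to the content processing call ...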
Processing large scenes can require a large working set,
and if the OS needs to start swapping to disk then this can
slow things down significantly.
If you need to process large scenes then it's a good idea to ensure that the machines
being used for the content processing have a decent amount of RAM.
Whenever large worlds need to be processed, it is worth splitting these up into tiles.
This limits the maximum memory load to per tile complexity rather than whole world complexity,
makes the process as a whole scale essentially linearly with respect to world size,
and enables incremental and parallel per tile processing.
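The exact mechanism depends on the tiling setup in use, but the overall shape of an incremental, parallel per-tile build is straightforward; in the following sketch, cTileDescriptor and ProcessTile are placeholders for the application's own tile description and per-tile build call.

#include <future>
#include <vector>

// Placeholder for whatever identifies a tile and its source content.
struct cTileDescriptor
{
    int tileX, tileY;
    // ... references to the source content overlapping this tile ...
};

// Placeholder for the per-tile 3D content processing and save.
static void ProcessTile(const cTileDescriptor& tile)
{
    (void)tile;
}

static void ProcessWorld(const std::vector<cTileDescriptor>& tiles)
{
    // Tiles are independent, so they can be rebuilt individually when their
    // source content changes, and processed on separate threads. In practice
    // the number of tiles in flight would be capped to bound peak memory load.
    std::vector<std::future<void>> tasks;
    tasks.reserve(tiles.size());
    for(const auto& tile : tiles)
        tasks.push_back(std::async(std::launch::async, ProcessTile, std::cref(tile)));
    for(auto& task : tasks)
        task.get();
}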
Wrapping scene elements is generally a good thing, particularly for the BSP processing, but in certain situations where very large numbers of vertices are being passed in for convex solid generation this part of the process can itself become a bottleneck.
A good way to work around this issue, if it comes up, is to pre-process the point set for each object type, reducing it to hull points up front.
This gives you a minimal set of hull points that can then be transformed
for each placed object instance's convex solid definition,
eliminating the need for any per-placed-object large point set processing.
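As an illustration of the idea only: the sketch below works in 2D with a standalone monotone chain hull, whereas the actual convex solid definitions are three dimensional, but the principle is the same, i.e. reduce each object type's point set to hull points once, then just transform those few points for each placed instance.

#include <algorithm>
#include <cstdint>
#include <vector>

struct cPoint { std::int64_t x, y; };

static std::int64_t Cross(const cPoint& o, const cPoint& a, const cPoint& b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// 2D convex hull (monotone chain), returned counter-clockwise.
// Run this once per object type, offline.
static std::vector<cPoint> ConvexHull(std::vector<cPoint> points)
{
    std::sort(points.begin(), points.end(),
              [](const cPoint& a, const cPoint& b)
              { return a.x < b.x || (a.x == b.x && a.y < b.y); });
    points.erase(std::unique(points.begin(), points.end(),
                             [](const cPoint& a, const cPoint& b)
                             { return a.x == b.x && a.y == b.y; }),
                 points.end());
    if(points.size() < 3)
        return points;
    std::vector<cPoint> hull(2 * points.size());
    std::size_t k = 0;
    for(std::size_t i = 0; i < points.size(); ++i) // lower hull
    {
        while(k >= 2 && Cross(hull[k - 2], hull[k - 1], points[i]) <= 0)
            --k;
        hull[k++] = points[i];
    }
    for(std::size_t i = points.size() - 1, t = k + 1; i-- > 0;) // upper hull
    {
        while(k >= t && Cross(hull[k - 2], hull[k - 1], points[i]) <= 0)
            --k;
        hull[k++] = points[i];
    }
    hull.resize(k - 1);
    return hull;
}

// Per placed instance, just transform the pre-computed hull points
// (rotation could be applied here as well, if object instances rotate).
static std::vector<cPoint> TranslateHull(const std::vector<cPoint>& typeHull,
                                         std::int64_t offsetX, std::int64_t offsetY)
{
    std::vector<cPoint> result;
    result.reserve(typeHull.size());
    for(const auto& p : typeHull)
        result.push_back({p.x + offsetX, p.y + offsetY});
    return result;
}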
Documentation for PathEngine release 6.04 - Copyright © 2002-2024 PathEngine