Impala

Philosophy

Performance and Parallelism

Grasshopper provides an extremely flexible programming model, rapid development, and an experimental canvas for ideas. However, inherently expensive geometric calculations can create performance bottlenecks, which limits Grasshopper's usability on larger data sets. Impala attempts to alleviate this by providing targeted speedups for specific expensive operations while behaving exactly like native Grasshopper components in all other respects. Grasshopper's explicit dependency graph, immutable semantics, and data-parallel operations make it an ideal candidate for parallelism.

Generative Parallel Library

Impala provides a library of generated, generic methods that support a variety of input patterns and arities. Generating these methods, rather than writing a single general-purpose version, extracts extra speedup by removing the overhead of extra loops and of allocating C#'s variable-argument arrays. These methods fall into several families (a simplified sketch of one follows the list):

  • ZipNxM iterates Grasshopper's tree looping logic over N inputs, producing M outputs for each item. This is equivalent to writing a component that uses GH_ParamAccess.Item with the same number of inputs and outputs, but set to compute in parallel.
  • RedNxM loops through inputs where each whole list is treated as a single solution item, as one would with GH_ParamAccess.List. This is a "Reduction" of the list to a value.
  • ZipKRedNxM takes K Zip inputs and N Reduce inputs and produces M values.
  • ZipNxGraftM takes N items and produces M lists of values. This is used in operations like curve division.
  • ZipKRedNxGraftM takes K items and N lists, and produces M lists of values.
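
As a rough illustration, the sketch below shows the shape of a Zip-style method: two item inputs are matched element-wise, a pure action produces one output per pair, and branches run in parallel. This is not Impala's actual code; the real generated methods operate on Grasshopper's GH_Structure data trees and reproduce its tree-matching rules, while plain nested lists are used here to keep the example self-contained.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ParallelZipSketch
{
    // Hypothetical Zip2x1: two item inputs, one output per matched pair.
    public static List<TOut>[] Zip2x1<TA, TB, TOut>(
        List<List<TA>> a,
        List<List<TB>> b,
        Func<TA, TB, TOut> action,       // pure, thread-safe worker
        Func<TA, TB, bool> errorCheck)   // validates each pair before the action runs
    {
        int branches = Math.Min(a.Count, b.Count);
        var result = new List<TOut>[branches];

        // Branches are independent units of work, so they are processed in parallel.
        Parallel.For(0, branches, i =>
        {
            List<TA> branchA = a[i];
            List<TB> branchB = b[i];
            int n = Math.Min(branchA.Count, branchB.Count);
            var outBranch = new List<TOut>(n);
            for (int j = 0; j < n; j++)
            {
                if (errorCheck(branchA[j], branchB[j]))
                    outBranch.Add(action(branchA[j], branchB[j]));
            }
            result[i] = outBranch;
        });

        return result;
    }
}
```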

These functions are called with their inputs, a pure action function to be run in parallel, and an error-checker class that can be created once and re-used. The error check is called on each computation unit to validate the input before the action function runs; the developer defines this check in the component. Several components also make use of granularity control to tune how work is divided among threads. This library is intended to be used by developers implementing parallel components, for example as sketched below.
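
Continuing the simplified sketch above (again, these names are illustrative rather than Impala's actual API, which passes a dedicated error-checker object rather than a bare delegate), a component developer might wire up an action and an error check like this, assuming RhinoCommon's Curve and Point3d types:

```csharp
using System;
using System.Collections.Generic;
using Rhino.Geometry;

public static class UsageSketch
{
    public static List<Point3d>[] EvaluateCurves(
        List<List<Curve>> curveTree, List<List<double>> paramTree)
    {
        // Error check, defined once and re-used: skip null or invalid curves.
        Func<Curve, double, bool> check = (crv, t) => crv != null && crv.IsValid;

        // Pure, thread-safe action: evaluate each curve at its matching parameter.
        Func<Curve, double, Point3d> evaluate = (crv, t) => crv.PointAt(t);

        return ParallelZipSketch.Zip2x1(curveTree, paramTree, evaluate, check);
    }
}
```

Keeping the validation in the error check means invalid inputs never reach the parallel worker, so the action itself can stay a pure function with no error handling of its own.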

Why no BReps?

Please see the active issue. In addition to not being fully thread-safe, BRep operations are notoriously expensive, and mesh-based alternatives can often yield equivalent results much faster.

Extensions

Impala offers custom parallel implementations of a few algorithms (MeshFlow, Visual Center, Isovists) that don't have a native Rhino equivalent. If there's one you're missing and would like to see, ask!
