We keep running into the same problem on BIM projects: beautiful, information-rich Revit or CAD models that simply won’t move on a phone, browser, or headset. Millions of polygons, nested families, hidden guts you’ll never see in a walkthrough — everything the authoring tool needs, but overkill for interactive delivery. This piece compares practical algorithms for polygon reduction, shares what’s worked (and what still trips us up), and shows how a modern pipeline can make BIM data dramatically more accessible.
Before arguing algorithms, we separate three levers:

- Decimation: reducing the triangle count of geometry that has to stay visible.
- Representation: instancing, compression, and per-part LOD variants of what remains.
- Elimination: not drawing geometry nobody will ever see.
That third lever is often the biggest win. By removing unnecessary interior detail (ducts inside closed shafts, screws inside beams, back-to-back drywall sheets), we regularly cut the poly count far more than decimation alone and create clean Level-of-Detail variants for web or mobile viewing. LOD then becomes a policy, not a single number.
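Treating LOD as a policy can start with a simple screen-space rule: pick the variant from how large an element actually appears. A minimal sketch in Python; the function name, pixel budgets, and camera assumptions are ours, not any engine's API:

```python
import math

def pick_lod(bbox_diag_m, distance_m, fov_deg=60.0, screen_px=2048,
             budget_px=(200.0, 40.0, 8.0)):
    """Choose an LOD from projected size: roughly how many pixels the
    element's bounding-box diagonal spans on screen. The pixel budgets
    are policy values agreed per project, not universal constants."""
    view_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    px = bbox_diag_m / view_width * screen_px
    for lod, threshold in enumerate(budget_px):
        if px >= threshold:
            return lod            # 0 = full detail
    return len(budget_px)         # below every budget: cull or impostor
```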
We lean on a small toolkit; none of it is exotic, but choosing the right tool for the right region of the model is where most of the gains come from.
A voxel grid is laid over the model; all vertices in a cell collapse to one representative. It’s near-linear time, extremely predictable, and a great first pass.
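A minimal sketch of the idea, assuming an indexed triangle mesh as NumPy arrays; real implementations add normal and UV handling on top:

```python
import numpy as np

def cluster_vertices(vertices, faces, cell_size):
    """Voxel-grid vertex clustering: every vertex in a cell collapses to
    the cell's centroid; triangles that degenerate are dropped."""
    # Quantize vertex positions to integer cell coordinates.
    keys = np.floor(vertices / cell_size).astype(np.int64)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)      # guard against numpy-version quirks
    # Representative = mean of all vertices that landed in the cell.
    reps = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(reps, inverse, vertices)
    np.add.at(counts, inverse, 1.0)
    reps /= counts[:, None]
    # Remap faces; discard triangles whose corners merged.
    f = inverse[faces]
    keep = (f[:, 0] != f[:, 1]) & (f[:, 1] != f[:, 2]) & (f[:, 2] != f[:, 0])
    return reps, f[keep]
```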
The classic workhorse. For an edge v1–v2 collapsed to a point v̄, cost = error1(v̄) + error2(v̄), where errori(v) is the sum, over the planes of faces incident to vi, of (n dot v + d)^2. We constrain borders, hard edges, and any curve tags from BIM to avoid chewing through sharp profiles.
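For concreteness, a sketch of the quadric bookkeeping behind that formula, assuming each vertex has already accumulated the quadrics of its incident faces; a production simplifier would also solve for the optimal collapse point and maintain a priority queue:

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """K = q q^T for the supporting plane n.v + d = 0 of a triangle,
    with unit normal n and q = (nx, ny, nz, d)."""
    n = np.cross(p1 - p0, p2 - p0)
    n /= np.linalg.norm(n)
    q = np.append(n, -np.dot(n, p0))   # d = -n.p0, so the plane passes p0
    return np.outer(q, q)

def quadric_error(Q, v):
    """Sum over accumulated planes of (n.v + d)^2, via h^T Q h, h = (v, 1)."""
    h = np.append(v, 1.0)
    return float(h @ Q @ h)

def edge_cost(Q1, Q2, v_bar):
    """Cost of collapsing an edge to v_bar: error under both endpoint
    quadrics, i.e. v_bar evaluated against Q1 + Q2."""
    return quadric_error(Q1 + Q2, v_bar)
```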
A QEM variant with extra guards: lock edges with dihedral angle above a threshold; keep planarity for wall faces; protect small radii. We also bias costs by semantic class (e.g., “window mullions lose less than walls”).
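A sketch of the two guard mechanisms, dihedral locks and semantic cost bias; the class names and weights are illustrative, tuned per project:

```python
import numpy as np

# Hypothetical per-class multipliers: higher = harder to simplify.
CLASS_WEIGHT = {"wall": 1.0, "trim": 5.0, "window_mullion": 8.0}

def edge_is_locked(edge, edge_faces, face_normals, angle_thresh_deg=40.0):
    """Lock borders (one incident face) and hard edges (sharp dihedral)."""
    incident = edge_faces[edge]            # face indices sharing this edge
    if len(incident) != 2:                 # border or non-manifold: keep
        return True
    n0, n1 = face_normals[incident[0]], face_normals[incident[1]]
    angle = np.degrees(np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0)))
    return angle > angle_thresh_deg

def biased_cost(base_cost, element_class):
    """Scale the QEM cost by semantic class, so mullions lose less."""
    return base_cost * CLASS_WEIGHT.get(element_class, 1.0)
```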
We resample to target edge length using uniform sampling and tangential relaxation. The result is near-equilateral triangles, which shade beautifully and compress well.
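The tangential relaxation is the step people ask about most; a minimal sketch of one pass, assuming per-vertex normals and one-ring adjacency are already available (the split/collapse/flip steps around it are omitted):

```python
import numpy as np

def tangential_relax(vertices, neighbors, normals, step=0.5):
    """One tangential-relaxation pass: pull each vertex toward the
    centroid of its one-ring, then strip the normal component so the
    surface relaxes in-plane instead of shrinking."""
    out = vertices.copy()
    for i, ring in enumerate(neighbors):
        if len(ring) == 0:
            continue
        centroid = vertices[ring].mean(axis=0)
        d = centroid - vertices[i]
        d -= np.dot(d, normals[i]) * normals[i]   # keep motion tangential
        out[i] = vertices[i] + step * d
    return out
```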
A tiny cheat-sheet we share internally:
| Algorithm | Strength | Use when | Common pitfall |
|---|---|---|---|
| Vertex clustering | Speed | Distant or noisy parts | Blurred silhouettes |
| QEM collapse | Shape fidelity | Most building elements | Rounded corners |
| Feature-aware QEM | Crisp edges | Curtain walls, trim | More tuning required |
| Isotropic remeshing | Clean topology | Organic or site pieces | Moves design edges |
We’ve learned to start with elimination. Three tactics help (a sketch of the first follows the list):

- Room-, shaft-, and floor-aware culling: drop geometry fully enclosed by boundaries the camera can never cross (ducts inside closed shafts, screws inside beams).
- Size-based culling: drop elements whose largest dimension can never cover more than a pixel or two at the closest plausible viewing distance.
- Coincident-face removal: collapse back-to-back sheets, such as paired drywall faces, into a single surface.
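A sketch of the first tactic, assuming watertight room and shaft shells and the open-source `trimesh` library (`contains` requires watertight input); the data layout here is ours:

```python
import trimesh  # assumed dependency; shells must be watertight meshes

def cull_enclosed(elements, shells):
    """Drop any element whose bounding-box corners all fall inside one
    closed shell (a room, shaft, or beam) the camera can never enter.
    `elements`: list of (name, trimesh.Trimesh); `shells`: closed meshes."""
    kept = []
    for name, mesh in elements:
        corners = trimesh.bounds.corners(mesh.bounds)   # 8 bbox corners
        enclosed = any(shell.contains(corners).all() for shell in shells)
        if not enclosed:
            kept.append((name, mesh))
    return kept
```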
In one recent job for a tiny facilities team (two people, municipal budget), this alone dropped triangles by 72 percent before any decimation. We then applied different strategies per class: feature-aware QEM for facade and stairs, vertex clustering for distant MEP, and isotropic remeshing on terrain.
We don’t argue about “percent reduction” anymore; we talk tolerances in model units: a maximum surface deviation per LOD, agreed with stakeholders before any simplifier runs.
A small formula we rely on (a sampled, one-sided Hausdorff distance from the simplified surface back to the original):

`sample_error = max over p in samples of min over q in original of length(p - q)`

Not perfect, since it misses deviation in the other direction, but fast to compute and easy to explain to stakeholders.
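In code it is a nearest-neighbor query; a sketch using SciPy's `cKDTree`, assuming both meshes have already been densely point-sampled:

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_error(samples, original_points):
    """One-sided Hausdorff estimate: for each point sampled on the
    simplified mesh, distance to the nearest point sampled densely on
    the original; report the worst case, in model units."""
    tree = cKDTree(original_points)
    dists, _ = tree.query(samples)   # nearest-neighbor distance per sample
    return float(dists.max())
```

When the budget allows, we run the query in both directions, which gives the full symmetric Hausdorff estimate.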
Automating this isn’t about one silver bullet; it’s a gated pipeline. We routinely combine:

- off-the-shelf mesh-processing libraries for clustering, QEM collapse, and remeshing;
- commercial optimizers for fast, one-click preview passes;
- custom adapters that read BIM metadata and steer the simplifier per element category.
Commercial tools can speed up the middle 80 percent and are great for one-click previews. Our experience, though, is that the last 20 percent — BIM semantics mapping, edge locking by category, floor-aware culling — benefits from custom code. We’ve had to write small adapters that look at element metadata and steer the simplifier; generic frameworks don’t know enough about how architects expect a corner to look.
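The adapters are small; most are a lookup from element metadata to simplifier settings. A sketch under those assumptions; the field names and values are illustrative, not a framework API:

```python
from dataclasses import dataclass

# Hypothetical settings record; field names are ours, not a library's.
@dataclass
class SimplifySettings:
    target_ratio: float      # fraction of triangles to keep
    lock_angle_deg: float    # dihedral threshold for edge locks
    keep_planar: bool        # planarity guard for large flat faces

# Category-to-strategy table: the domain knowledge generic frameworks
# lack. Values are illustrative and tuned per project.
RULES = {
    "CurtainWall": SimplifySettings(0.6, 25.0, True),
    "Wall":        SimplifySettings(0.2, 40.0, True),
    "MEP":         SimplifySettings(0.1, 60.0, False),
}
DEFAULT = SimplifySettings(0.3, 40.0, False)

def settings_for(element_metadata: dict) -> SimplifySettings:
    """Steer the simplifier from BIM metadata (e.g., a Revit category)."""
    return RULES.get(element_metadata.get("category", ""), DEFAULT)
```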
We’ll be candid: automated UV preservation during heavy decimation is still our soft spot. We can maintain lightmaps and native UVs under moderate reduction, but extreme targets sometimes require a repack and a rebake. We try to flag that early.
A niche client — an exhibit designer preparing a web viewer for a library renovation — handed us a Revit export with detailed MEP and furniture, 34.2 million triangles after tessellation. Their goal: smooth viewing on a mid-range tablet and a desktop browser.
Pipeline overview:

1. Export and tessellate the Revit model; normalize units.
2. Internal elimination: cull enclosed MEP, fasteners, and furniture the camera can never see.
3. Targeted decimation per element class against the agreed tolerances.
4. Mesh compression and instancing for delivery to the tablet and browser viewer.
Results
| Stage | Triangles | Median FPS (tablet) |
|---|---|---|
| Raw export | 34.2 M | 9 |
| After internal elimination | 9.6 M | 21 |
| After targeted decimation | 2.1 M | 43 |
| With compression + instancing | 2.1 M | 58 |
We didn’t win everywhere. Curtain wall corners needed a custom rule to keep mullion continuity; the generic simplifier kept “biting” into the reveal. Also, a few fixtures were incorrectly tagged and got culled; we restored them by whitelisting families by name and min dimension. Still, the team shipped a responsive viewer without hiding important design intent.
We reach for off-the-shelf tools first; they’re fast. But unconventional BIM challenges — protecting crisp profiles, reading room boundaries, steering simplification by category — often need domain-aware logic. Our small, purpose-built steps (edge locks, room-aware culling, planarity guards) let us hit tight budgets without the “melted LEGO” look. The trade-off is more engineering upfront and more time spent validating tolerances with stakeholders. We think that’s a fair compromise for interactive BIM that feels right.
If you’re a BIM specialist or a potential client stuck with a heavy model, the takeaway is simple: combine elimination, the right decimator per part, and tolerance-driven LODs. The measurable gains — load time, frame rate, and fewer “why is that corner mushy?” comments — add up quickly. And yes, there will be edge cases (literally). We’re still refining our UV story at extreme reductions, and we’re always tuning heuristics around curtain walls and railings. But the path to fast, faithful, web-friendly BIM is well-lit now, and it starts with not drawing what no one will ever see.