We spend a lot of our week pulling heavyweight architectural models out of their natural habitat and making them behave nicely in a browser. This post is a practical look at what actually moves the needle: a clean conversion pipeline, a few targeted algorithmic choices, and — crucially — the right mesh compression for your audience. We’ll share a side-by-side Draco vs MeshOpt test from a recent engagement with a small facilities client who wanted a campus viewer that loads fast on average laptops and tablets. Names omitted; the headaches are real.
Raw BIM is verbose by design. Our job isn’t to “destroy triangles”; it’s to remove randomness so the GPU and network can predict what’s next. The pipeline below is roughly what we built for the client’s main office building (originally an IFC dump).
Input facts
What we do before compression
For quantization, we pick a bit depth so that max_error <= bbox_diagonal / (2^bits).
Only after these steps do we choose a mesh compressor.
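To make that bound concrete, here’s a minimal TypeScript helper (our own naming, not tied to any library) that inverts it: given a bounding-box diagonal and an error budget, it returns the smallest bit depth that satisfies the inequality.

```ts
// Smallest quantization bit depth whose worst-case error on the mesh's
// bounding-box diagonal stays at or below maxError (same units as the bbox).
// Hypothetical helper: the 8..16 search range is our own convention.
function quantizationBits(bboxDiagonal: number, maxError: number): number {
  for (let bits = 8; bits <= 16; bits++) {
    if (bboxDiagonal / 2 ** bits <= maxError) return bits;
  }
  return 16; // cap at 16 bits; beyond that, raw floats are usually simpler
}

// Example: a 40 m diagonal with a 2 mm budget needs 15 bits
// (40 / 2^15 ≈ 1.2 mm, while 14 bits would give ≈ 2.4 mm).
console.log(quantizationBits(40, 0.002)); // 15
```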
Both Draco and MeshOpt are supported in glTF (Draco via the KHR_draco_mesh_compression extension, MeshOpt via EXT_meshopt_compression). Both can yield brutal size drops; seeing 90 to 95 percent reduction over raw floats is not unusual when quantization and index reordering are in play.
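Wiring both into a viewer is cheap. Here’s a sketch with three.js; the import paths and the Draco decoder location depend on your three.js version and bundler, so treat them as placeholders.

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
import { MeshoptDecoder } from 'three/examples/jsm/libs/meshopt_decoder.module.js';

const scene = new THREE.Scene();
const loader = new GLTFLoader();

// KHR_draco_mesh_compression: decoded by a WASM module served separately.
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/decoders/draco/'); // placeholder path
loader.setDRACOLoader(dracoLoader);

// EXT_meshopt_compression: the small decoder module bundled with three.js examples.
loader.setMeshoptDecoder(MeshoptDecoder);

// Either encoding of the same scene now loads through one code path.
loader.load('campus-tile-042.glb', (gltf) => scene.add(gltf.scene));
```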
Where they differ in practice is decode cost versus bytes on the wire: MeshOpt is built around fast, streaming-friendly decode, while Draco squeezes out more bytes at the price of more CPU (more on both below). We like both; the trick is picking the one that fits the project’s center of gravity.
For the client, we exported two glTF packages of the same optimized scene: one with Draco, one with MeshOpt. Same LODs, same KTX2 textures, same viewer (WebGL2), same camera path. We measured on a modest 6-core laptop and a mid-range tablet.
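If you want to reproduce a similar A/B, the gltf-transform Node API can emit both variants from one source file. A rough sketch follows; the package names are real, but option names and defaults shift between versions, and our actual build script does more than this.

```ts
import { NodeIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';
import { draco, meshopt } from '@gltf-transform/functions';
import { MeshoptEncoder } from 'meshoptimizer';
import draco3d from 'draco3dgltf';

async function exportVariants(src: string): Promise<void> {
  await MeshoptEncoder.ready;
  const io = new NodeIO()
    .registerExtensions(ALL_EXTENSIONS)
    .registerDependencies({
      'draco3d.encoder': await draco3d.createEncoderModule(),
      'meshopt.encoder': MeshoptEncoder,
    });

  // Variant A: KHR_draco_mesh_compression.
  const dracoDoc = await io.read(src);
  await dracoDoc.transform(draco());
  await io.write('scene-draco.glb', dracoDoc);

  // Variant B: EXT_meshopt_compression (re-read so the variants stay independent).
  const meshoptDoc = await io.read(src);
  await meshoptDoc.transform(meshopt({ encoder: MeshoptEncoder }));
  await io.write('scene-meshopt.glb', meshoptDoc);
}

exportVariants('scene-optimized.glb').catch(console.error);
```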
Scene after optimization (before mesh compression):
Compression settings
| Variant | Download size (MB) | First pixels (s) | Camera ready (s) |
|---|---|---|---|
| MeshOpt | 63.4 | 1.2 | 2.0 |
| Draco | 56.7 | 2.0 | 3.3 |
“First pixels” is when we show the first LOD tile of the lobby. “Camera ready” is when all tiles in the initial frustum are interactive at LOD 100.
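If you want to record the same checkpoints, the User Timing API is enough. In the minimal sketch below, the tile IDs, the LOD bookkeeping, and the reading of “LOD 100” as full detail are our own stand-ins for a real tile manager.

```ts
// Hypothetical tile bookkeeping; in a real viewer this lives in the tile manager.
const initialFrustumTiles = ['lobby-0', 'lobby-1', 'atrium-0'];
const loadedLods = new Map<string, number>();

performance.mark('viewer-start');

// Called by the viewer each time a tile finishes uploading at some LOD (percent).
function onTileRendered(tileId: string, lodPercent: number): void {
  // "First pixels": the first time any lobby tile appears, at any LOD.
  if (tileId.startsWith('lobby') && performance.getEntriesByName('first-pixels').length === 0) {
    performance.measure('first-pixels', 'viewer-start');
  }
  loadedLods.set(tileId, lodPercent);
  // "Camera ready": every tile in the initial frustum has reached full detail.
  const ready = initialFrustumTiles.every((id) => (loadedLods.get(id) ?? 0) >= 100);
  if (ready && performance.getEntriesByName('camera-ready').length === 0) {
    performance.measure('camera-ready', 'viewer-start');
  }
}
```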
| Device | MeshOpt decode (s) | Draco decode (s) | Peak RAM (MB) |
|---|---|---|---|
| Laptop (6-core) | 0.42 | 1.05 | 620 |
| Tablet (mid-tier) | 1.10 | 2.45 | 660 |
A few notes we wish someone had told us the first time:
MeshOpt’s pipeline favors cache locality and fast, incremental decode. When we send a tile, the viewer can reconstruct the smallest usable LOD quickly, while fetching higher-detail chunks in parallel. Combine that with quantization and you get a smooth ramp.
Draco, by contrast, leans on more aggressive entropy coding. Great for bytes on the wire, less great for CPU on mid-range devices. You can lower Draco’s compression level to improve speed, but then you give up its main advantage.
We also learned (again) that good LODs beat any compressor. Our 20 percent LOD kept silhouettes clean by collapsing edges along straight runs while preserving door reveals. That meant users barely noticed the first second of coarse geometry. If your LODs chatter or pop, faster decode won’t save the experience.
We sometimes ship both: MeshOpt for “live” tiles in the user’s frustum, Draco for background or offline bundles. Yes, that complicates the asset build, but it keeps the experience snappy while trimming egress costs. The viewer just picks what it needs.
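Conceptually the picker is tiny. Here’s a hypothetical sketch; the manifest fields and the frustum/prefetch flags are invented for illustration.

```ts
// Each tile is published in both encodings; the viewer picks per request.
interface TileEntry {
  id: string;
  meshoptUrl: string; // EXT_meshopt_compression build: fastest decode
  dracoUrl: string;   // KHR_draco_mesh_compression build: smallest download
}

function pickTileUrl(tile: TileEntry, inFrustum: boolean, prefetch: boolean): string {
  // Tiles the user is looking at: optimize for time-to-interactive.
  if (inFrustum) return tile.meshoptUrl;
  // Background prefetch and offline bundles: optimize for bytes (and egress cost).
  if (prefetch) return tile.dracoUrl;
  return tile.meshoptUrl;
}
```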
To choose quantization bits per mesh, we measured a per-axis error bound with
err = bbox_size / (2^bits)
and estimated projected screen error as err * pixels_per_world_unit. If that stayed under 0.5 pixels for the closest planned camera, we called it good. Simple, but it prevented us from “optimizing” into shimmer.
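In code, that check is the error bound plus a projection estimate. Here’s a sketch under the same definitions, using a simple pinhole-style pixels-per-world-unit approximation; the example numbers are ours, not the client’s.

```ts
// Worst-case per-axis quantization error, projected to screen pixels at the
// closest planned camera position. Names mirror the formulas above.
function projectedErrorPx(
  bboxSize: number,        // bounding-box extent along one axis (world units)
  bits: number,            // quantization bits for that axis
  distance: number,        // closest planned camera distance (world units)
  viewportHeightPx: number,
  verticalFovRad: number
): number {
  const err = bboxSize / 2 ** bits; // world-space error bound
  const worldPerPixel = (2 * distance * Math.tan(verticalFovRad / 2)) / viewportHeightPx;
  return err / worldPerPixel;       // err * pixels_per_world_unit
}

// Example: a 12 m facade at 14 bits, viewed from 3 m on a 1080 px-tall
// viewport with a 60° vertical FOV, lands around 0.23 px -- under a 0.5 px budget.
console.log(projectedErrorPx(12, 14, 3, 1080, Math.PI / 3).toFixed(2));
```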
We did test a few off-the-shelf pipelines. They were quick to try, but they fought our goals: they either flattened instancing, ignored spatial chunking, or produced LODs that didn’t respect architectural edges. The custom passes were not glamorous — some days felt like sweeping the same room twice — but they turned a twitchy, 8-second wait into a 2-second glide, which was the client’s real ask.
For our client’s campus viewer, MeshOpt won because it reduced wait time where humans notice it most. Draco still has a seat at the table when bandwidth is scarce. Either way, a thoughtful pipeline makes heavy models feel light — no magic, just a handful of decisions that compound.