Faster First Pixels or Smaller Downloads?

We turn heavy BIM into web-friendly scenes by removing randomness before we compress. That means finding real instances, baking only the transforms that help, cleaning topology, quantizing positions/normals/uvs, authoring sensible LODs, and packing textures smartly. Once the model is predictable, compression pays off.

In our side-by-side export of the same building, MeshOpt made the viewer feel quicker: first geometry appeared sooner and interaction started earlier, especially on mid-range laptops and tablets. Draco delivered a file roughly a tenth smaller, which helped on shaky or metered networks, but its extra CPU decode cost delayed that first smooth tumble. The trade-off is simple: if your audience values immediate responsiveness, MeshOpt wins; if bandwidth is your bottleneck and a slightly longer wait is acceptable, Draco pulls ahead.

What surprised us was how much the groundwork mattered. Good LODs and precise quantization hid the early seconds, so users saw stable silhouettes instead of popping. That’s why we sometimes ship both: MeshOpt for what’s on screen now, Draco for everything else. It isn’t flashy, but choosing compression to match how your model will be used turns “wait and watch” into “click and go”—and that’s what most teams actually need.

Common Questions

Q: What’s the single biggest lever before we even talk compression? A: De-randomizing the scene. We detect true instances, keep instancing where it reduces draw calls, bake transforms only when it helps, weld and clean topology, quantize attributes, and author stable LODs. Compression works best on predictable data.

Q: MeshOpt or Draco — how do we choose in practice? A: If your KPI is time-to-first-interaction, pick MeshOpt. If your KPI is minimizing bytes on the wire, pick Draco. We’ve seen MeshOpt draw first pixels sooner, while Draco saves roughly another tenth in size at the cost of extra CPU decode.

Q: How much size reduction is realistic after a sensible pipeline? A: With quantization and either compressor, 90–95 percent shrinkage over raw floats is common. The spread between MeshOpt and Draco was about 10–12 percent in Draco’s favor in our test; whether that matters depends on network conditions.

Q: Why did MeshOpt “feel” faster even when its files were slightly bigger? A: It decodes quicker and streams progressively. Small tiles reconstruct usable LODs fast, so users can tumble the camera in roughly 1–2 seconds instead of waiting for a larger CPU burst.

Q: Can we mix both compressors in one product? A: Yes. We sometimes ship MeshOpt for on-screen, user-facing tiles and Draco for background or offline bundles. It complicates the build, but it balances interactivity with egress costs.

Q: What should teams actually measure when benchmarking their pipeline? A: User-facing moments: “first pixels” and “camera ready.” Total load time can hide pain; shaving a second off first interaction often matters more than winning a synthetic “download complete” number.

Q: If bandwidth is shaky but we still want acceptable interactivity, any middle ground? A: Lower Draco’s compression level modestly to cut decode time, and keep tight LODs and spatial chunking. You’ll give back some bytes but may reach a usable first interaction window without a full switch to MeshOpt.

Contact Elf.3D to explore how custom mesh processing algorithms might address your unique challenges. We approach every conversation with curiosity about your specific needs rather than generic solutions.

*Interested in discussing your mesh processing challenges? We'd be happy to explore possibilities together.*

Making BIM Fly on the Web: What Worked, What Didn’t, and Why Compression Choices Matter

We spend a lot of our week pulling heavyweight architectural models out of their natural habitat and making them behave nicely in a browser. This post is a practical look at what actually moves the needle: a clean conversion pipeline, a few targeted algorithmic choices, and — crucially — the right mesh compression for your audience. We’ll share a side-by-side Draco vs MeshOpt test from a recent engagement with a small facilities client who wanted a campus viewer that loads fast on average laptops and tablets. Names omitted; the headaches are real.


The baseline: heavy BIM is not the enemy — randomness is

Raw BIM is verbose by design. Our job is not to "destroy triangles"; it's to remove randomness so the GPU and network can predict what's next. The pipeline below is roughly what we built for the client's main office building (originally an IFC dump).

Input facts

  • 18.6 M triangles, 12 k nodes, 2.3 k unique meshes
  • Unbaked transforms, repeated geometry as copies, deeply nested instances
  • High-res textures with inconsistent atlases

What we do before compression

  1. Instance detection: hash by topology and attributes, collapse duplicates (see the sketch after this list).
  2. Transform bake: push transforms into vertex data only where it reduces draw calls; otherwise keep instancing.
  3. Topology cleanup: weld within a tolerance, remove degenerate faces, split per-material.
  4. Quantization: positions to 14 bits, normals to 10, uvs to 12. A simple bound we use to reason about position error is
    max_error <= bbox_diagonal / (2^bits).
  5. LOD authoring: three LODs per unique mesh (100, 50, 20 percent).
  6. Texture diet: atlas where safe and transcode to KTX2 (basis) with mip chains.
  7. Chunking for streaming: group primitives by spatial cell and material to keep early tiles small.
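
To make step 1 concrete, here is a minimal sketch of instance detection by fingerprinting each mesh's index and attribute buffers. The MeshData shape and helper names are assumptions for illustration, not our production code; exact-byte hashing also needs a normalization pass first (see the instancing notes further down) to catch near-duplicates.

```ts
// Sketch: fingerprint each mesh by hashing its index and attribute buffers,
// then group meshes with identical fingerprints as instancing candidates.
// The MeshData shape is hypothetical; adapt it to your scene representation.
import { createHash } from "node:crypto";

interface MeshData {
  indices: Uint32Array;
  positions: Float32Array; // in local (un-baked) space
  normals?: Float32Array;
  uvs?: Float32Array;
}

function meshFingerprint(mesh: MeshData): string {
  const hash = createHash("sha256");
  const feed = (arr?: Uint32Array | Float32Array): void => {
    if (!arr) return;
    hash.update(new Uint8Array(arr.buffer, arr.byteOffset, arr.byteLength));
  };
  feed(mesh.indices);
  feed(mesh.positions);
  feed(mesh.normals);
  feed(mesh.uvs);
  return hash.digest("hex");
}

// Buckets of mesh indices that share a fingerprint; any bucket with more than
// one entry becomes a single shared mesh referenced by several nodes.
function groupInstances(meshes: MeshData[]): Map<string, number[]> {
  const groups = new Map<string, number[]>();
  meshes.forEach((mesh, index) => {
    const key = meshFingerprint(mesh);
    const bucket = groups.get(key) ?? [];
    bucket.push(index);
    groups.set(key, bucket);
  });
  return groups;
}
```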

Only after these steps do we choose a mesh compressor.

Two good options: Draco and MeshOpt

Both are supported by glTF (Draco via KHR_draco_mesh_compression, MeshOpt via EXT_meshopt_compression). Both can yield brutal size drops; seeing 90 to 95 percent reduction over raw floats is not unusual when quantization and index reordering are in play.
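
As a point of reference, here is roughly how a three.js viewer registers decoders for both extensions so one codebase can open either package; the decoder path and file name are placeholders.

```ts
// Sketch: one GLTFLoader wired for both compression extensions, so the viewer
// can open MeshOpt- and Draco-compressed tiles interchangeably.
import { Scene } from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
import { DRACOLoader } from "three/examples/jsm/loaders/DRACOLoader.js";
import { MeshoptDecoder } from "three/examples/jsm/libs/meshopt_decoder.module.js";

const scene = new Scene();

const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath("/decoders/draco/"); // placeholder path to the Draco WASM decoder

const loader = new GLTFLoader();
loader.setDRACOLoader(dracoLoader);       // handles KHR_draco_mesh_compression
loader.setMeshoptDecoder(MeshoptDecoder); // handles EXT_meshopt_compression

loader.load("tiles/lobby_lod0.glb", (gltf) => {
  scene.add(gltf.scene); // placeholder file name; tiles come from your chunking step
});
```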

Where they differ in practice:

  • Draco usually produces the smallest files for the same visual quality — great for tight bandwidth or offline distribution.
  • MeshOpt tends to decode faster and supports progressive streaming of index/vertex data with minimal overhead, which matters for that “first pixels” moment.

We like both. The trick is picking the one that fits the project’s center of gravity.

The test: one building, two compressions, same viewer

For the client, we exported two glTF packages of the same optimized scene: one with Draco, one with MeshOpt. Same LODs, same KTX2 textures, same viewer (WebGL2), same camera path. We measured on a modest 6-core laptop and a mid-range tablet.

Scene after optimization (before mesh compression):

  • Geometry payload: 178 MB
  • Textures (KTX2): 42 MB
  • Nodes/materials: unchanged functionally but draw calls reduced by 37 percent

Compression settings

  • Draco: quantization matched to pipeline (positions 14 bits, normals 10, uvs 12), compression level 6.
  • MeshOpt: vertex/index filters on, byte-grained streams, target overdraw optimization.

Result table 1: user-visible metrics

| Variant | Download size (MB) | First pixels (s) | Camera ready (s) |
| --- | --- | --- | --- |
| MeshOpt | 63.4 | 1.2 | 2.0 |
| Draco | 56.7 | 2.0 | 3.3 |

“First pixels” is when we show the first LOD tile of the lobby. “Camera ready” is when all tiles in the initial frustum are interactive at LOD 100.
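
If you want to record the same two moments in your own viewer, browser performance marks are enough; the hook points below are assumptions about where your renderer detects those states.

```ts
// Sketch: capturing "first pixels" and "camera ready" with the browser
// Performance API. Where these callbacks fire is viewer-specific.
performance.mark("load-start");

export function onFirstTileRendered(): void {
  // First LOD tile of the entry view is on screen.
  performance.mark("first-pixels");
  performance.measure("time-to-first-pixels", "load-start", "first-pixels");
}

export function onInitialFrustumInteractive(): void {
  // Every tile in the initial frustum is interactive at full LOD.
  performance.mark("camera-ready");
  performance.measure("time-to-camera-ready", "load-start", "camera-ready");
}

// Read the measures later, e.g. to forward to analytics:
// performance.getEntriesByType("measure").forEach((m) => console.log(m.name, m.duration));
```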

Result table 2: cost of decoding

| Device | MeshOpt decode (s) | Draco decode (s) | Peak RAM (MB) |
| --- | --- | --- | --- |
| Laptop (6-core) | 0.42 | 1.05 | 620 |
| Tablet (mid-tier) | 1.10 | 2.45 | 660 |

A few notes we wish someone had told us the first time:

  • Draco’s extra savings were real — about 11 percent smaller package — but the additional CPU decode time hurt early interactivity for our target devices.
  • MeshOpt’s decode time was much friendlier and let our streaming strategy shine. Users got geometry on screen roughly 0.8 seconds sooner and could tumble the view in ~2 seconds.
  • On wired broadband, MeshOpt “felt” better. On spotty Wi-Fi, Draco clawed some ground back during the long tail of background tiles thanks to less bandwidth pressure.

Why the numbers look this way

MeshOpt’s pipeline favors cache locality and fast, incremental decode. When we send a tile, the viewer can reconstruct the smallest usable LOD quickly, while fetching higher-detail chunks in parallel. Combine that with quantization and you get a smooth ramp.

Draco, by contrast, leans on more aggressive entropy coding. Great for bytes on the wire, less great for CPU on mid-range devices. You can lower Draco’s compression level to improve speed, but then you give up its main advantage.

We also learned (again) that good LODs beat any compressor. Our 20 percent LOD kept silhouettes clean by preserving edge collapses along straight runs and door reveals. That meant users barely noticed the first second of coarse geometry. If your LODs chatter or pop, faster decode won’t save the experience.

When we would pick each

  • Pick MeshOpt when time-to-first-interaction matters (sales demos, site walks, ops dashboards) and your audience is a mix of laptops and tablets. You keep CPU demand reasonable and can stream progressively without clever tricks.
  • Pick Draco when bandwidth is the limiting factor (large deployments over metered or poor networks, global CDN costs) and your audience is likely on desktops or you can afford a splash screen longer than two seconds.

We sometimes ship both: MeshOpt for “live” tiles in the user’s frustum, Draco for background or offline bundles. Yes, that complicates the asset build, but it keeps the experience snappy while trimming egress costs. The viewer just picks what it needs.
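
When we do ship both, the viewer-side choice can stay trivial; the manifest fields below are hypothetical, and the point is only that the decision is made per tile, not per scene.

```ts
// Sketch: per-tile choice between two variants of the same geometry.
// Field names are hypothetical placeholders for your tile manifest.
interface TileManifestEntry {
  id: string;
  meshoptUrl: string; // EXT_meshopt_compression build: fast decode, progressive
  dracoUrl: string;   // KHR_draco_mesh_compression build: fewer bytes on the wire
}

function pickVariantUrl(tile: TileManifestEntry, inCurrentFrustum: boolean): string {
  // On-screen tiles prioritize decode speed; background tiles prioritize size.
  return inCurrentFrustum ? tile.meshoptUrl : tile.dracoUrl;
}
```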

The parts we had to grind on

  • Quantization tolerance: 14-bit positions were fine for architectural scale in this project. On the mechanical annex, tight curves in exposed ductwork needed 15 bits to avoid zippering at grazing angles. That bumped the geometry payload by about 3 to 4 percent.
  • Instancing mismatches: BIM exports sometimes sneak in tiny UV shifts or material overrides that break instancing. We wrote a normalization pass for materials and UV scales. Generic tools we tried either merged too aggressively (visual artifacts) or not at all (lost the win). Custom was faster in the end.
  • Texture atlases: We saved bandwidth by atlasing but paid with occasional bleeding on billboards near glass edges. Mitigation was a small dilation and conservative mip bias in the viewer. Good enough, not perfect.

A tiny formula that kept us honest

To choose quantization bits per mesh, we estimated a per-axis error bound with

err = bbox_size / (2^bits)

and estimated projected screen error as err * pixels_per_world_unit. If that stayed under 0.5 pixels for the closest planned camera, we called it good. Simple, but it prevented us from “optimizing” into shimmer.
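
Expressed as code, the same check might look like the sketch below; the bounding box size and pixels-per-world-unit values in the usage example are hypothetical.

```ts
// Sketch of the bit-selection rule above: raise the bit count until the
// projected quantization error stays under half a pixel for the closest
// planned camera.
function quantizationError(bboxSize: number, bits: number): number {
  return bboxSize / Math.pow(2, bits); // per-axis worst-case error, world units
}

function minBitsForScreenError(
  bboxSize: number,
  pixelsPerWorldUnit: number, // at the closest planned camera position
  maxPixelError = 0.5,
  maxBits = 16
): number {
  for (let bits = 8; bits <= maxBits; bits++) {
    const screenError = quantizationError(bboxSize, bits) * pixelsPerWorldUnit;
    if (screenError <= maxPixelError) return bits;
  }
  return maxBits; // cap; beyond this, revisit LODs and chunking instead
}

// Hypothetical example: a 30 m bounding box viewed at ~50 px per world unit.
// At 14 bits the per-axis bound is 30 / 2^14 ≈ 1.8 mm ≈ 0.09 px on screen;
// the helper returns the smallest bit count that already clears 0.5 px.
const bits = minBitsForScreenError(30, 50);
console.log(bits);
```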

Why we built rather than borrowed (lightly, we promise)

We did test a few off-the-shelf pipelines. They were quick to try, but they fought our goals: they either flattened instancing, ignored spatial chunking, or produced LODs that didn’t respect architectural edges. The custom passes were not glamorous — some days felt like sweeping the same room twice — but they turned a twitchy, 8-second wait into a 2-second glide, which was the client’s real ask.


Takeaway

  1. De-randomize the data first (instancing, transforms, cleanup).
  2. Quantize and LOD with intent so early frames look stable.
  3. Choose compression for the audience: MeshOpt for speed and smooth streaming, Draco for smaller downloads.
  4. Measure the moments that matter (first pixels, camera ready), not just total load.

For our client’s campus viewer, MeshOpt won because it reduced wait time where humans notice it most. Draco still has a seat at the table when bandwidth is scarce. Either way, a thoughtful pipeline makes heavy models feel light — no magic, just a handful of decisions that compound.