Documentation

Getting Started

FluxSpace turns raw field captures into interactive 3D visualizations you can explore in the browser. The pipeline is simple: upload a zip, let the worker process it, then view the results.

  1. Create an account and sign in
  2. Go to Upload and drop your .zip file
  3. Large files (over 50 MB) are automatically uploaded in parts — no action needed
  4. The file is stored securely and a background worker begins processing
  5. Track progress on the Runs page
  6. When processing is complete, open the interactive 3D viewer
  7. Download GLB meshes, logs, or additional exports

What Is a “Run”?

A run represents a single zipped capture folder exported from the FluxSpace capture rig (Raspberry Pi + OAK-D RGBD camera + optional magnetometer). The zip typically contains raw sensor data, images, and metadata from a field session.

run_20260115_1430.zip
├── metadata.json        # session info, timestamps, sensor config
├── images/              # RGBD frames from OAK-D
│   ├── 000001_rgb.png
│   ├── 000001_depth.png
│   └── ...
├── imu/                 # IMU readings (optional)
│   └── imu.csv
└── mag/                 # magnetometer readings (optional)
    └── mag.csv
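
As a sketch of the kind of validation performed on this layout (the exact rules are internal to fluxspace-core; the function name and required-entry list here are illustrative assumptions):

```typescript
// Illustrative check that a run zip contains the required entries.
// Per the layout above, metadata.json and images/ are required while
// imu/ and mag/ are optional. The function name and return shape are
// assumptions, not the real fluxspace-core API.
function validateRunEntries(entries: string[]): { ok: boolean; missing: string[] } {
  const required = ["metadata.json", "images/"];
  const missing = required.filter(
    (req) => !entries.some((e) => e === req || e.startsWith(req))
  );
  return { ok: missing.length === 0, missing };
}
```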

Upload & Chunking

Navigate to /dashboard/runs/new and drag your .zip file onto the dropzone. Maximum file size is 1 GB.

Supabase Free plan limits individual objects to 50 MB. FluxSpace handles this automatically: files larger than 50 MB are split into <49 MB binary chunks, uploaded as separate parts, and tracked by a manifest. The worker reassembles the original zip before processing.
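
The split itself is plain byte slicing. A minimal sketch, where the 49 MB part size matches the limit described above but the constant and function names are illustrative:

```typescript
// Split a file's bytes into fixed-size parts, each under the 50 MB
// object limit. PART_SIZE and splitIntoParts are illustrative names,
// not the actual FluxSpace client code.
const PART_SIZE = 49 * 1000 * 1000; // 49 MB, under the 50 MB object cap

function splitIntoParts(data: Uint8Array, partSize: number = PART_SIZE): Uint8Array[] {
  const parts: Uint8Array[] = [];
  for (let offset = 0; offset < data.length; offset += partSize) {
    // subarray creates a view, so no bytes are copied during the split
    parts.push(data.subarray(offset, Math.min(offset + partSize, data.length)));
  }
  return parts;
}
```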

Storage layout in runs-raw bucket

runs/<runId>/upload/
├── manifest.json          # part list, sizes, original filename
└── parts/
    ├── part_00001.bin     # raw byte chunk 1
    ├── part_00002.bin     # raw byte chunk 2
    └── ...
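
The manifest is what ties the parts back together. Its exact schema isn't documented here, so the field names below are assumptions; the idea is that part names, sizes, and the original filename give the worker everything it needs to reassemble and verify the zip:

```typescript
// Hypothetical manifest.json shape; field names are illustrative,
// not the exact FluxSpace schema.
interface UploadManifest {
  originalFilename: string;
  totalSize: number; // size of the original zip in bytes
  partCount: number;
  parts: { name: string; size: number }[];
}

const manifest: UploadManifest = {
  originalFilename: "run_20260115_1430.zip",
  totalSize: 120_000_000,
  partCount: 3,
  parts: [
    { name: "part_00001.bin", size: 49_000_000 },
    { name: "part_00002.bin", size: 49_000_000 },
    { name: "part_00003.bin", size: 22_000_000 },
  ],
};
```

A useful invariant of a manifest like this: the part sizes must sum to the original file size, which gives the worker a cheap integrity check before reassembly.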

Resume support: If your browser disconnects mid-upload, re-select the same file. FluxSpace detects the partial upload and offers to resume, skipping completed parts.

Cancel & retry: You can cancel at any time. Completed parts are preserved for when you retry.
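
Resume and retry both reduce to the same computation: diff the manifest's part list against what is already in storage, then upload only the remainder. A sketch, with illustrative names:

```typescript
// Given the full part list from the manifest and the set of parts
// already present in storage, return only the parts still to upload.
function partsToUpload(allParts: string[], completed: Set<string>): string[] {
  return allParts.filter((name) => !completed.has(name));
}
```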

Processing Pipeline

After upload, a background Python worker picks up the run and executes the fluxspace-core pipeline. Heavy compute runs on a dedicated worker instance, outside the web server.

1. Ingest

The worker downloads the manifest and all parts from private storage, reassembles them into the original zip, extracts it, and validates the contents.
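
The actual worker is Python, but the reassembly step is just ordered concatenation of the parts listed in the manifest. A TypeScript sketch of the idea:

```typescript
// Concatenate part buffers, in manifest order, back into the
// original zip bytes. Illustrative, not the real worker code.
function reassemble(parts: Uint8Array[]): Uint8Array {
  const total = parts.reduce((n, p) => n + p.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const part of parts) {
    out.set(part, offset); // copy each part at its running offset
    offset += part.length;
  }
  return out;
}
```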

2. Reconstruct

RGBD frames are fused into a 3D surface mesh. If magnetometer data is present, a magnetic heatmap mesh is generated and aligned to the surface.

3. Export

The worker writes viewer assets (manifest.json, scene.glb, optional heatmap.glb) and any additional export files back to private storage, then marks the run as done.

Outputs

When processing completes, the following assets are available:

  • Viewer assets — manifest.json, scene.glb, and optional heatmap.glb, loaded by the in-browser 3D viewer
  • Exports folder — any extra output files (point clouds, measurements, reports) placed in the exports directory
  • Pipeline log — pipeline.log with timestamped processing details

3D Viewer

The viewer is a Three.js scene that loads GLB models via short-lived signed URLs. No plugins or desktop software required.

  • Orbit controls — rotate, pan, and zoom around the scene
  • Heatmap toggle — show or hide the magnetic overlay
  • Opacity slider — adjust heatmap transparency in real time
  • Reset camera — fit the view to the scene bounds
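
For example, the opacity slider can map directly onto the heatmap mesh's material, assuming a Three.js-style material with `opacity` and `transparent` flags (the function name is illustrative):

```typescript
// Clamp the slider value to [0, 1] and apply it to the material.
// Three.js only alpha-blends a material when `transparent` is true,
// so it is toggled whenever the mesh is not fully opaque.
function setHeatmapOpacity(
  material: { opacity: number; transparent: boolean },
  value: number
): void {
  material.opacity = Math.min(1, Math.max(0, value));
  material.transparent = material.opacity < 1;
}
```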

Status Lifecycle

Each run progresses through a well-defined set of states:

uploaded → queued → processing → exporting → done
                        ↘            ↘
                       failed       failed
  • uploaded — zip stored (possibly as parts), awaiting trigger
  • queued — worker has been notified
  • processing — reconstruction in progress (progress bar updates live)
  • exporting — writing output files to storage
  • done — viewer and downloads are ready
  • failed — error details shown on the run page
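
The transitions above can be encoded as a small lookup table. This is a sketch of the lifecycle, not FluxSpace's actual enforcement code:

```typescript
type RunStatus = "uploaded" | "queued" | "processing" | "exporting" | "done" | "failed";

// Allowed transitions, mirroring the diagram: failed is reachable
// only from processing and exporting; done and failed are terminal.
const transitions: Record<RunStatus, RunStatus[]> = {
  uploaded: ["queued"],
  queued: ["processing"],
  processing: ["exporting", "failed"],
  exporting: ["done", "failed"],
  done: [],
  failed: [],
};

function canTransition(from: RunStatus, to: RunStatus): boolean {
  return transitions[from].includes(to);
}
```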

Storage

All data is stored in three private Supabase Storage buckets:

  • runs-raw — uploaded zip parts and manifests
  • runs-processed — viewer assets, exports, and worker output
  • runs-logs — pipeline logs

Because the buckets are private, the browser never reads storage directly: all access goes through short-lived signed URLs generated server-side after ownership verification.
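
Since the URLs are short-lived, clients should treat them as disposable and re-request rather than cache. A sketch of a freshness check, where the 60-second lifetime is an assumed value rather than FluxSpace's actual setting:

```typescript
// Returns true while a signed URL issued at `issuedAtMs` is still
// within its TTL. The 60 s default is an assumption for illustration.
function isSignedUrlFresh(issuedAtMs: number, nowMs: number, ttlSeconds = 60): boolean {
  return nowMs - issuedAtMs < ttlSeconds * 1000;
}
```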

FAQs

What file size limits apply?

Maximum 1 GB per zip upload. Supabase Free plan limits individual objects to 50 MB, so FluxSpace automatically splits larger files into <49 MB parts. Contact support for larger datasets.

What if my upload gets interrupted?

Re-select the same file and FluxSpace will detect the partial upload. Click “Resume” to continue from where you left off — completed parts are skipped.

How long does processing take?

Typical processing time is 2–10 minutes depending on the number of frames and sensor data included in the run.

Do I need a magnetometer?

No. The magnetometer is optional. Without it, you still get a 3D surface mesh from the RGBD data; the heatmap overlay will simply be absent.

Can I download the raw outputs?

Yes. GLB files, export artifacts, and the pipeline log are all downloadable from the run detail page once processing completes.

Local Testing

To test the chunked upload pipeline locally:

  1. Start the dev server: npm run dev
  2. Sign in and navigate to /dashboard/runs/new
  3. Upload a 200 MB+ zip — observe the part-by-part progress indicator
  4. Verify parts in Supabase Storage under runs/<runId>/upload/parts/
  5. Verify manifest.json exists at runs/<runId>/upload/manifest.json
  6. Verify the worker endpoint was called with the runId