🌌 π³: Scalable Permutation-Equivariant Visual Geometry Learning
🐙 GitHub Repository | 🚀 Project Page
Transform your videos or image collections into detailed 3D models. The π³ model processes your visual data to generate a rich 3D point cloud and estimate the corresponding camera poses.
How to Use:
- Provide Your Media: Upload a video or image set. You can specify a sampling interval below. By default, videos are sampled at 1 frame per second, and image sets use every image (an interval of 1); a minimal sampling sketch follows this list. Your inputs will be displayed in the "Preview" gallery.
- Generate the 3D Model: Press the "Reconstruct" button to initiate the process.
- Explore and Refine Your Model: The generated 3D model will appear in the viewer on the right. Interact with it by rotating, panning, and zooming. You can also download the model as a GLB file. For further refinement, use the options below the viewer to adjust point confidence, filter by frame, or toggle camera visibility.
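The default sampling described in step 1 can be reproduced outside the demo roughly as follows. This is a minimal sketch assuming OpenCV (`cv2`) is installed; the demo's own preprocessing may differ in its details.

```python
import cv2

def sample_video_frames(video_path: str, fps_target: float = 1.0):
    """Sample a video at roughly `fps_target` frames per second (default 1 fps)."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
    step = max(1, round(native_fps / fps_target))
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames

def sample_images(image_paths: list[str], interval: int = 1):
    """Keep every `interval`-th image; interval=1 keeps every image."""
    return image_paths[::max(1, interval)]
```

With these defaults, a 30 fps video keeps roughly one frame out of every 30, and an image set is passed through unchanged.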
A Quick Note on Performance: The core processing by π³ is incredibly fast, typically finishing in under a second. However, rendering the final 3D point cloud can take longer, depending on the complexity of the scene and the capabilities of the rendering engine.
1. Upload Media
2. View Reconstruction
Please upload media and click Reconstruct.
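The GLB download mentioned in step 3 can be produced from a colored point cloud roughly like this. This is a minimal sketch assuming the trimesh library; the function name and argument shapes are illustrative, not the demo's actual exporter.

```python
import numpy as np
import trimesh

def export_point_cloud_glb(points: np.ndarray, colors: np.ndarray,
                           path: str = "reconstruction.glb") -> str:
    """points: (M, 3) float XYZ; colors: (M, 3) uint8 RGB, one color per point."""
    cloud = trimesh.PointCloud(points, colors=colors)
    scene = trimesh.Scene([cloud])
    scene.export(path)  # file format is inferred from the .glb extension
    return path
```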
3. Adjust Visualization
Confidence Threshold (%): 0–100
Show Points from Frame
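One plausible reading of the percentage slider is a percentile cutoff over the per-point confidence scores. The sketch below assumes NumPy arrays and that interpretation, which may not match the demo's exact thresholding.

```python
import numpy as np

def filter_by_confidence(points: np.ndarray, conf: np.ndarray,
                         threshold_pct: float) -> np.ndarray:
    """Keep points whose confidence meets the given percentile (0-100)."""
    cutoff = np.percentile(conf, threshold_pct)
    return points[conf >= cutoff]
```

At 0% every point is kept; higher values progressively hide low-confidence points.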
Examples
Click any row to load an example.

| Upload Video | Frame/Image Interval | Confidence Threshold (%) | Show Cameras |
| --- | --- | --- | --- |