Chinese version

In the world of digital imaging, the classic photo of “Lenna” in her feathered hat serves as the universal benchmark for image processing (for her story, see Lenna 97 (NSFW)). If we were to find its authoritative counterpart in the 3D domain, the Stanford Bunny is undoubtedly the one.

It even has its own Wikipedia page. On Reddit, it’s common to see users rendering or 3D-printing the model, with the comments section inevitably bringing up Lenna (haha). It was also inducted into the 3D Sample Models Hall of Fame by a certain obscure website.


For the backstory, you can check out the blog post by Dr. Turk, one of its creators. The original dataset can be found here.

This rabbit doesn’t actually have a specific name—“Bunny” is just a common noun for a small rabbit, much like “Nick the Fox.” In academia, it is universally referred to as the Stanford Bunny.

Simply put, the Bunny model originated from a 7.5-inch terracotta rabbit figurine. In 1994 (making it older than me—respect to the “Bunny Senior”), Greg Turk and Marc Levoy at Stanford University used a Cyberware range scanner to capture range data, which they digitized into a polygonal mesh of 69,451 triangles. The project aimed to solve the technical challenge of fusing range images from multiple viewpoints into a single, seamless mesh. Due to scanning-angle constraints and the figurine’s hollow structure, the model still contains five holes: two from the physical openings at the base and three from data loss caused by occlusion.

The original figurine and the scanned mesh

How deeply has the Bunny been involved in subsequent 3D research? Let’s just say: “Papers come and go, but the Bunny is forever.” Since there are countless papers featuring the Bunny, I’ve selected a few classics that I’ve personally studied (talk about a double qualifier!).

1. Surface Reconstruction

This field aims to transform discrete 3D sample point sets into continuous topological meshes (such as triangle meshes) or implicit function representations to restore an object’s true geometric shape and surface characteristics. The goal of surface restoration is to turn discrete “points” into a continuous entity with physical properties, geometric topology, and rendering capabilities. This allows computers to truly “understand” and manipulate the object, facilitating subsequent operations like simulation, collision detection, texture mapping, and lighting effects.

1.1 Dense Reconstruction

[SIGGRAPH '94] Zippered Polygon Meshes from Range Images
The “birth certificate” of the Bunny—let’s see where it all began: This paper proposed a Zippering algorithm to merge multiple overlapping range images taken from different angles into a complete, coherent, and seamless triangle mesh model. From this point on, the wheels of fate began to turn.
Bunny2

[TVCG '99] The Ball-Pivoting Algorithm for Surface Reconstruction
The famous, though not always user-friendly, Ball-Pivoting Algorithm (BPA) often found in MeshLab: By rolling a sphere of a given radius over the point cloud surface and touching three points at a time, the algorithm automatically finds connections between sample points to rapidly generate a continuous triangle mesh. BPA relies heavily on surface normals to determine the direction of the “ball rotation.”
The Bunny’s geometric features (long ears, curled tail) possess complex local curvatures, making it an excellent candidate for testing the robustness of normal estimation and ball-rotation logic. At the time, the Bunny was one of the few publicly available complete point cloud datasets composed of multiple real-world scans, making it perfect for demonstrating how BPA achieves smooth transitions at the boundaries of different range scans.
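The local feasibility test at the heart of ball pivoting is easy to sketch: a ball of radius $r$ can rest on three points only if their circumradius is at most $r$. This is only the necessary local condition; the full algorithm additionally requires the ball to contain no other sample points and pivots around the edges of an advancing front. A minimal pure-Python sketch:

```python
import math

def circumradius(a, b, c):
    """Circumradius of triangle (a, b, c) in 3D: R = |ab||bc||ca| / (4 * area)."""
    la, lb, lc = math.dist(b, c), math.dist(c, a), math.dist(a, b)
    s = (la + lb + lc) / 2                       # Heron's formula for the area
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))
    if area == 0.0:
        return float("inf")                      # degenerate (collinear) triangle
    return (la * lb * lc) / (4.0 * area)

def ball_can_pivot(a, b, c, r):
    """A ball of radius r can touch all three points only if r >= circumradius."""
    return r >= circumradius(a, b, c)

# Unit right triangle: circumradius = hypotenuse / 2 ≈ 0.707
tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(ball_can_pivot(*tri, r=1.0))   # True
print(ball_can_pivot(*tri, r=0.5))   # False
```

This is also why the choice of radius matters so much in practice: where the local circumradius of the true surface exceeds $r$ (sparse sampling, or the Bunny's tight ear curvature at large radii), the ball falls through and leaves a hole.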

[SGP '06, SIGGRAPH '13] Poisson Surface Reconstruction
This paper is a masterpiece that needs no introduction. It recasts surface reconstruction as a global optimization problem: deriving a gradient field from the oriented point cloud, it solves a Poisson equation for a smooth scalar indicator function, then extracts a closed triangle mesh with excellent topological properties as an isosurface of that function.
The authors used the Bunny as a benchmark primarily to demonstrate the algorithm’s scalability with large-scale point clouds and its ability to close complex topological structures.
Bunny4
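The core idea is easiest to see in one dimension, where the “surface” is just the two endpoints of an interval: the oriented samples define a gradient field $V$, and solving $\chi'' = \nabla \cdot V$ recovers the indicator function $\chi$. A toy sketch on a uniform grid with Dirichlet boundaries (the paper itself solves on an adaptive octree):

```python
# 1D toy Poisson reconstruction: two oriented samples bound an interval;
# solving chi'' = div(V) recovers its indicator function chi.
n = 64
h = 1.0 / n
a, b = 16, 48                    # cells containing the "surface" samples

# V approximates the gradient of the indicator: it opposes the outward
# normals (-1 at the left sample, +1 at the right), scaled to unit integral.
V = [0.0] * n
V[a] = +1.0 / h
V[b] = -1.0 / h

# Right-hand side: divergence of V via central differences.
rhs = [0.0] * n
for i in range(1, n - 1):
    rhs[i] = (V[i + 1] - V[i - 1]) / (2 * h)
rhs[0] = rhs[-1] = 0.0           # boundary rows: chi = 0

# Tridiagonal system for the 1D Laplacian with chi = 0 at both ends.
sub = [0.0] + [1.0 / h**2] * (n - 2) + [0.0]
dia = [1.0] + [-2.0 / h**2] * (n - 2) + [1.0]
sup = [0.0] + [1.0 / h**2] * (n - 2) + [0.0]

# Thomas algorithm: forward elimination, then back substitution.
for i in range(1, n):
    m = sub[i] / dia[i - 1]
    dia[i] -= m * sup[i - 1]
    rhs[i] -= m * rhs[i - 1]
chi = [0.0] * n
chi[-1] = rhs[-1] / dia[-1]
for i in range(n - 2, -1, -1):
    chi[i] = (rhs[i] - sup[i] * chi[i + 1]) / dia[i]

print(chi[32], chi[8], chi[16])  # ~1.0 inside, ~0.0 outside, 0.5 at a sample
```

On this grid the recovered $\chi$ is 1 inside the interval, 0 outside, and exactly 0.5 at the two sample cells, so the “surface” falls out as the 0.5 isovalue, just as Marching Cubes extracts the isosurface in 3D.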

[TOG '22] Stochastic Poisson Surface Reconstruction
This paper proposes a method that treats noisy, sparse, or even incomplete point clouds as random samples. It uses probabilistic modeling to ensure reconstruction results remain stable and plausible even under uncertain data.
The Bunny was used here to demonstrate the algorithm’s ability to quantify uncertainty: it doesn’t just generate a mean field representing the object’s shape, but also uses a variance field to intuitively mark which regions (such as unscannable blind spots or sparse areas) are unreliable. Essentially, it shifts the focus from “single geometry extraction” to “spatial probability field modeling.”
Bunny3

Beyond the dense reconstruction mentioned above, the Bunny is also a frequent guest in mesh simplification and approximation algorithms.

1.2 Mesh Simplification and Approximation

Mesh simplification and approximation aim to significantly reduce the computational, storage, and rendering overhead by decreasing polygon counts or reconstructing geometric proxies, all while preserving the core shape and topological features. Whether through direct triangulation or dense reconstruction, the resulting Bunny meshes often contain excessive redundancy. How to reduce data volume while ensuring structural integrity and topological correctness was a major research focus in the early days.

[SIGGRAPH '97] Surface Simplification Using Quadric Error Metrics
This paper introduced a mesh simplification algorithm based on Quadric Error Metrics (QEM). By iteratively performing edge collapse operations, it uses 4x4 symmetric matrices to efficiently calculate the sum of squared distances from a point to its associated planes. This allows for a massive reduction in face count while maintaining the object’s geometric silhouette and detailed features remarkably well.
The Bunny model was used to demonstrate that the algorithm could preserve key features like the ears and facial details even under high levels of simplification, avoiding the severe shape distortion seen in simpler methods like vertex clustering.
Bunny5
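The bookkeeping behind QEM is small enough to sketch directly. Assuming unit-normal plane equations $ax + by + cz + d = 0$, the sum of squared point-plane distances for a vertex collapses into a single symmetric 4x4 matrix:

```python
# Quadric error metric: each plane p = (a, b, c, d) with a^2+b^2+c^2 = 1
# contributes the rank-1 quadric K = p p^T; a vertex accumulates Q = sum(K),
# and its error is v~^T Q v~ with homogeneous v~ = (x, y, z, 1).
def plane_quadric(p):
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

def quadric_error(Q, v):
    """Sum of squared distances from v to all planes encoded in Q."""
    vh = (v[0], v[1], v[2], 1.0)
    return sum(vh[i] * Q[i][j] * vh[j] for i in range(4) for j in range(4))

# Two planes meeting at the z-axis: x = 0 and y = 0.
Q = add_quadrics(plane_quadric((1, 0, 0, 0)), plane_quadric((0, 1, 0, 0)))
print(quadric_error(Q, (0, 0, 5)))   # 0.0: lies on both planes
print(quadric_error(Q, (1, 2, 0)))   # 1^2 + 2^2 = 5.0
```

During simplification, collapsing an edge simply adds the two endpoint quadrics, and the new vertex is placed at the position minimizing this quadratic form; that additivity is what makes the algorithm so fast.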

[TOG '04] Variational Shape Approximation
This paper proposed a simplification algorithm based on variational clustering. It partitions complex geometric surfaces into multiple regions represented by best-fitting planes (proxies). Through iterative optimization of region assignments and plane parameters, it achieves a piecewise linear approximation that is closest to the original shape under $L^2$ or $L^{2,1}$ metrics.
The Bunny was used to show that the algorithm can perform high-quality anisotropic simplification on complex freeform surfaces using very few planar primitives. It demonstrates how the clustering process automatically segments the bunny’s ears, limbs, and back into regions that best represent their geometric characteristics, capturing the principal curvature and silhouette even at extremely low face counts.
Bunny6

Both of the papers above have concrete implementations in CGAL.

1.3 Low-poly Reconstruction

Structured reconstruction (or low-poly reconstruction) aims to extract latent geometric constraints and topological relationships from point clouds. By using a minimal set of planes, primitives, or parametric patches, it builds digital models that are both accurate and highly concise. Since the Bunny is a natural freeform surface rather than a man-made object, it serves as an excellent test case to evaluate the generalization capabilities of these algorithms.

[TOG '20] Kinetic Shape Reconstruction
This work proposed a geometric reconstruction framework based on dynamic space partitioning. It treats extracted original planes as moving objects expanding in space, using Kinetic Space Partitioning to generate adaptive candidate cells. Finally, a Graph Cut optimization extracts a watertight, manifold polygonal mesh.
As a representative of natural surfaces, the Bunny—with its non-convexity and thin structures like the ears—is primarily used to illustrate the impact of the hyperparameter $\lambda$ on the reconstruction result.
Bunny7

[SIGGRAPH '22] Variational Shape Reconstruction via Quadric Error Metrics
This paper presents a point cloud reconstruction algorithm combining variational clustering with QEM. It partitions the point cloud into multiple proxy regions and uses QEM matrices to construct a global energy function, alternating between optimizing region assignments and plane parameters. This achieves high-precision, piecewise smooth reconstruction while suppressing noise. Again, the familiar Bunny serves as the benchmark for natural surface comparison.
Bunny8

1.4 Model Repair

Model repair and completion aim to fill missing holes and correct topological errors in scanned data through geometric inference and topological optimization. The goal is to restore incomplete, noisy raw geometry into a watertight, manifold model that follows design logic. (For more details, refer to the book Polygon Mesh Processing (PMP)). As mentioned earlier, the raw Stanford Bunny scan naturally contains several holes, making it a perfect subject for model repair research.
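The simplest hole-filling baseline, which the more sophisticated methods in this section improve on, is to triangulate the boundary loop directly, for example as a fan. A hedged sketch with hypothetical vertex indices:

```python
def fan_fill(loop):
    """Fill a hole bounded by an ordered vertex loop with a triangle fan.

    `loop` is a list of vertex indices along the hole boundary; the result is
    a list of (i, j, k) triangles. This ignores geometry entirely, so it only
    behaves well for small, roughly planar holes; real repair methods optimize
    triangle shape or reason volumetrically about inside/outside.
    """
    v0 = loop[0]
    return [(v0, loop[i], loop[i + 1]) for i in range(1, len(loop) - 1)]

print(fan_fill([10, 11, 12, 13, 14]))
# [(10, 11, 12), (10, 12, 13), (10, 13, 14)]
```

A loop of $n$ boundary vertices always yields $n - 2$ triangles; the Bunny's base holes are large and curved, which is exactly where this naive fill fails and smarter completion is needed.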

[SGP '05] Atomic Volumes for Mesh Completion
This paper proposed a structured mesh completion algorithm based on Atomic Volumes. By partitioning space into fundamental cells (atomic volumes) created by intersecting planes, and treating them as nodes in a Markov Random Field (MRF), it uses a Graph Cut algorithm to determine the “inside/outside” status of each cell. This allows the repair of large-scale scan holes while generating a low-poly mesh with perfect manifold topology. The Bunny model was used to prove that the algorithm can handle complex boundaries, non-planar holes, and cases requiring strict topological consistency.
Bunny10

2. Point Cloud Registration

In fact, the Bunny appears in even more papers within this field. Since the original model was synthesized from multiple partial scans, it is naturally suited for point cloud registration research. That said, this field isn’t as “trendy” as it used to be—or rather, 3D vision has become somewhat niche, traditional 3D vision even more so, and registration within traditional 3D is a “niche within a niche within a niche” (the “Triple Niche” theory, if you will). However, many researchers are still persevering; notably, a CVPR 2023 Best Student Paper was recently awarded in this direction. Perhaps the mainstream academic community also hopes to bring more attention back to traditional vision.

[TIP '22] R-PointHop: A Green, Accurate, and Unsupervised Point Cloud Registration Method
This paper proposes R-PointHop, a lightweight and unsupervised registration method. By constructing a multi-level projection structure based on the Saab transform to extract local geometric features, it achieves high-precision alignment through correlations in the feature space. This significantly reduces computational costs and energy consumption while maintaining strong robustness against noise.
The Bunny model is used here to prove that the unsupervised algorithm possesses exceptional cross-dataset generalization on “unseen” categories and robust geometric feature extraction capabilities.
Bunny11

3. Feature Description

Feature description refers to transforming raw 3D coordinates ($x, y, z$) into mathematical expressions (feature vectors) with geometric semantics and discriminative power. This allows computers to recognize, match, and understand 3D objects through “shape features,” much like humans do.

In traditional geometric descriptor research, the Bunny is a favorite primarily due to several irreplaceable characteristics:

  • Asymmetry: The rabbit is not symmetric from any angle, which is crucial for verifying the uniqueness of a descriptor (if it were a sphere or a cube, descriptors would be highly repetitive).
  • Multi-level Details: The ears test the capture of elongated structures and edge features; the back tests the ability to differentiate large smooth surfaces; and the gaps between the legs test the handling of self-occlusion and neighborhood interference.

[IMR '01] Feature Extraction from Point Clouds
This paper proposes a method to identify geometric features (such as edges and corners) by calculating the eigenvalues of the covariance matrix for each point in a point cloud. These features are then used for efficient simplification and non-rigid registration. The paper demonstrates its pipeline using the Bunny model, constructing the point cloud as an adjacency graph to identify “crease points” and “boundary points” (like on the ears and spine) by fitting local planes to detect geometric mutations.
Bunny12
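The covariance-eigenvalue idea is easy to sketch. Below is a pure-Python version of the “surface variation” statistic $\lambda_{\min} / (\lambda_1 + \lambda_2 + \lambda_3)$, a standard quantity in this line of work; the paper’s actual crease and boundary tests differ in detail, so treat this as illustrative:

```python
import math

def covariance(points):
    """3x3 covariance matrix of a local point neighborhood."""
    n = len(points)
    c = [sum(p[k] for p in points) / n for k in range(3)]
    A = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[k] - c[k] for k in range(3)]
        for i in range(3):
            for j in range(3):
                A[i][j] += d[i] * d[j] / n
    return A

def sym3_eigenvalues(A):
    """Analytic eigenvalues of a symmetric 3x3 matrix, in descending order."""
    p1 = A[0][1]**2 + A[0][2]**2 + A[1][2]**2
    q = (A[0][0] + A[1][1] + A[2][2]) / 3.0
    if p1 == 0.0:                                # already diagonal
        return tuple(sorted((A[0][0], A[1][1], A[2][2]), reverse=True))
    p = math.sqrt(((A[0][0]-q)**2 + (A[1][1]-q)**2 + (A[2][2]-q)**2 + 2*p1) / 6.0)
    B = [[(A[i][j] - (q if i == j else 0.0)) / p for j in range(3)] for i in range(3)]
    detB = (B[0][0]*(B[1][1]*B[2][2] - B[1][2]*B[2][1])
          - B[0][1]*(B[1][0]*B[2][2] - B[1][2]*B[2][0])
          + B[0][2]*(B[1][0]*B[2][1] - B[1][1]*B[2][0]))
    phi = math.acos(max(-1.0, min(1.0, detB / 2.0))) / 3.0
    e1 = q + 2*p*math.cos(phi)
    e3 = q + 2*p*math.cos(phi + 2*math.pi/3)
    return e1, 3*q - e1 - e3, e3

def surface_variation(points):
    """lambda_min / (l1 + l2 + l3): ~0 on flat patches, larger at creases."""
    e = sym3_eigenvalues(covariance(points))
    return e[2] / (e[0] + e[1] + e[2])

flat   = [(x/10, y/10, 0.0)       for x in range(-2, 3) for y in range(-2, 3)]
crease = [(x/10, y/10, abs(x)/10) for x in range(-2, 3) for y in range(-2, 3)]
print(surface_variation(flat), surface_variation(crease))
```

On a flat patch the smallest eigenvalue vanishes, while along a roof-shaped crease (like the Bunny's ear ridges or spine) it stays well above zero, which is exactly what makes the statistic usable as a feature detector.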

[ICRA '09] Fast Point Feature Histograms (FPFH) for 3D Registration
Dr. Rusu, the author of PFH (and developer of the PCL library), used the Bunny model extensively in his doctoral thesis and early landmark papers to demonstrate Feature Persistence. The paper shows the distribution of FPFH features on the Bunny model (bunny00) under different search radii $r$. Additionally, the Bunny model is partitioned into multiple scans to test the descriptor’s robustness in registration tasks—specifically, recovering the pose of the bunny scans through descriptor matching.
Bunny13
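The quantity FPFH histograms is a set of angles between a point pair’s normals, measured in a Darboux frame. A sketch of that pair feature, following the $(\alpha, \phi, \theta, d)$ formulation in Rusu’s papers (conventions vary slightly between implementations, so take the exact signs as illustrative):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def normalize(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)

def pair_features(ps, ns, pt, nt):
    """The (alpha, phi, theta, d) tuple underlying PFH/FPFH for one pair."""
    dvec = sub(pt, ps)
    d = math.sqrt(dot(dvec, dvec))
    u = ns                           # Darboux frame built on the source normal
    v = normalize(cross(dvec, u))
    w = cross(u, v)
    alpha = dot(v, nt)               # deviation of the target normal from u
    phi = dot(u, dvec) / d           # angle between u and the connecting line
    theta = math.atan2(dot(w, nt), dot(u, nt))
    return alpha, phi, theta, d

# Two points on a flat patch with identical normals: all three angles vanish.
print(pair_features((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 0, 1)))
```

The descriptor for a point is then a histogram of these tuples over all pairs in its neighborhood (PFH), or a weighted combination of per-neighbor simplified histograms (FPFH), which is what makes it fast enough for registration.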

[CVPR '20] Neural Implicit Embedding for Point Cloud Analysis
This paper introduces a Neural Implicit Embedding method that learns a continuous implicit function to project discrete point clouds into a high-dimensional feature space. This overcomes the sensitivity of traditional methods to sampling density and noise. The paper demonstrates how to use ELM (Extreme Learning Machine) to fit the Distance Field around the Bunny point cloud. Through this model, the authors prove that even with discrete input, neural implicit embeddings can capture the Bunny’s complex surface details via weight parameters.
Bunny14

4. Shape Retrieval

This field is a bit too outdated, so I’ve removed it.

The above are the papers I’ve encountered where the Bunny appears in traditional 3D vision and machine learning. Now, let’s look at some Bunny-related papers in the realm of neural computing. I didn’t expect the Bunny to still be “carrying the team” (still the GOAT!) in modern neural computing—haha.

5. Data Representation

Data representation defines the mathematical and logical methods for describing, storing, and indexing 3D geometric shapes. The choice of representation directly determines the algorithm’s efficiency, numerical precision, and the topological complexity it can express.

Choosing between different representations is essentially a trade-off across three dimensions:

  • Storage Overhead: The memory or VRAM capacity required to describe geometric details.
  • Computational Speed: Response time for spatial queries, geometric transformations, and Boolean operations.
  • Expression Accuracy: The degree of restoration for complex surfaces, sharp features, and topological structures.

| Representation Category | Typical Examples | Core Definition | Advantages | Limitations |
| --- | --- | --- | --- | --- |
| Explicit | Point Clouds, Triangle Mesh | Directly records discrete coordinate information of the surface. | High rendering efficiency, intuitive, hardware-accelerated. | Difficult to perform complex Boolean operations or handle dynamic topology changes. |
| Implicit | TSDF, Signed Distance Field | Indirectly defines the surface via a scalar function $f(x, y, z) = 0$. | Easy to handle thin structures and topological fusion; good for watertight repair. | Cannot be rendered directly; requires conversion via algorithms like Marching Cubes. |
| Volumetric | Voxel Grid | Partitions space into regular grid cells. | Extremely fast spatial queries; suitable for collision detection and physics simulation. | Storage needs grow cubically with resolution; prone to discretization errors. |
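To make the implicit and volumetric categories concrete, here is a minimal sketch that samples an analytic SDF (a unit sphere standing in for the Bunny) onto a voxel grid and performs the basic inside/outside test, using the recovered volume as a sanity check:

```python
import math

def sphere_sdf(x, y, z, r=1.0):
    """Implicit representation: f < 0 inside, f > 0 outside, f = 0 on the surface."""
    return math.sqrt(x*x + y*y + z*z) - r

# Volumetric representation: sample the SDF on an n^3 grid over [-1.5, 1.5]^3.
n, lo, hi = 32, -1.5, 1.5
h = (hi - lo) / n
inside = 0
for i in range(n):
    for j in range(n):
        for k in range(n):
            # evaluate at the voxel center
            x = lo + (i + 0.5) * h
            y = lo + (j + 0.5) * h
            z = lo + (k + 0.5) * h
            if sphere_sdf(x, y, z) < 0.0:
                inside += 1

volume = inside * h**3
print(volume, 4/3 * math.pi)   # voxel estimate vs. analytic sphere volume
```

The cubic growth in the table is visible directly: doubling $n$ multiplies the loop count (and storage, if the samples were kept) by eight, which is precisely what motivates GPU parallelism and sparse/octree variants.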

[TOG '10] Fast Parallel Surface and Solid Voxelization on GPUs
This paper proposed a GPU-accelerated fast parallel surface and solid voxelization algorithm. By efficiently partitioning 3D geometric meshes into voxels and leveraging the massive parallelism of GPUs for rapid inside/outside testing and intersection calculations, it achieves high-precision voxelization of complex models in seconds. The Bunny model is used for baseline performance testing and to demonstrate the visual differences between conservative voxelization and 6-separable voxelization.
Bunny15

[IROS '19] Directional TSDF: Modeling Surface Orientation for Coherent Meshes
This work introduced an orientation-aware enhancement for Truncated Signed Distance Fields (TSDF). By incorporating surface normals (orientation) as an additional constraint within the traditional scalar distance field, it solves the “surface bleeding” issue—a common problem where traditional TSDF fails to distinguish between thin structures or close-range parallel surfaces at low voxel resolutions. This leads to more topologically accurate and clearer coherent meshes.
The authors used the Bunny to show how Directional TSDF maintains clear surface boundaries even with single-sided observations or thin features, by storing data independently across six directions.
Bunny16
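The scalar TSDF that Directional TSDF extends can be sketched in a few lines: distances are truncated to a narrow band, and repeated observations are fused by a weighted running average. A minimal 1D sketch (the truncation band of 0.1 and the sample depths are arbitrary):

```python
def tsdf(dist, trunc=0.1):
    """Truncate a signed distance to [-1, 1], in units of the truncation band."""
    return max(-1.0, min(1.0, dist / trunc))

def fuse(voxel, new_d, new_w=1.0):
    """Weighted running average: the standard TSDF fusion update."""
    d, w = voxel
    return ((d * w + new_d * new_w) / (w + new_w), w + new_w)

# One voxel at x = 0.05 observing a surface measured at x = 0.0 and x = 0.02.
voxel = (0.0, 0.0)
for surface_x in (0.0, 0.02):
    voxel = fuse(voxel, tsdf(0.05 - surface_x))
print(voxel)   # averaged truncated distance and accumulated weight
```

Directional TSDF keeps six such (distance, weight) pairs per voxel, one per axis direction, so observations of a thin sheet's front and back faces no longer average each other away; that is the "surface bleeding" fix described above.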

6. Texture Mapping

Texture Mapping is the technique of mapping image information (color, texture, normals, etc.) onto the surface of a 3D model. If a 3D model is a “raw wooden carving,” texture mapping is the process of “applying a colored skin” or “carving in the details.” Modern rendering goes beyond just “pasting a photo”; it simulates real physical properties through various mapping channels: Albedo, Normal, Roughness/Metallic maps, etc.
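As a toy illustration of what a UV map is, here is a fixed analytic parameterization of the unit sphere. This is not what a real unwrapper computes; methods like the one below cut the mesh into charts and minimize distortion numerically, but the output has the same shape, a $(u, v) \in [0, 1]^2$ coordinate per surface point used to look up the texture channels:

```python
import math

def spherical_uv(x, y, z):
    """Map a point on the unit sphere to (u, v) in [0, 1]^2.

    u wraps around the equator (longitude), v runs pole to pole (latitude);
    the clamp guards asin against floating-point inputs slightly outside [-1, 1].
    """
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, v

print(spherical_uv(1.0, 0.0, 0.0))   # (0.5, 0.5): on the equator
print(spherical_uv(0.0, 1.0, 0.0))   # (0.5, 0.0): the north pole
```

Even this toy shows why unwrapping is hard: the parameterization is badly stretched near the poles and has a seam where u wraps, which is exactly the distortion and seam placement that UV algorithms trade off.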

[SIGGRAPH Asia '25] PartUV: Part-Based UV Unwrapping of 3D Meshes
This paper proposes a part-based UV unwrapping algorithm. By decomposing complex 3D models into parts with simple geometry and semantic consistency, and performing independent parameterization optimization within each part, it achieves UV layouts with significantly reduced distortion and higher packing efficiency. The Bunny serves as a component of the test dataset—nothing too flashy here.

7. Neural Computing and Geometric Rendering

Neural computing refers to using neural networks (such as Multi-Layer Perceptrons, MLPs) as function approximators to encode 3D geometry into continuous mathematical fields, breaking through the storage and precision limits of traditional discrete meshes. Geometric rendering, on the other hand, involves transforming these neural representations into visual images using differentiable mathematical operators (such as ray-intersection or Gaussian projection), enabling the observation and optimization of physical properties.

[CVPR '19] DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
This paper proposed a learning framework based on continuous implicit representations. By encoding 3D shapes as Signed Distance Functions (SDF) within deep neural networks, it utilizes latent space to achieve compact compression, efficient interpolation, and high-precision surface reconstruction from sparse or noisy inputs.
The Bunny serves as a crucial qualitative example, visually demonstrating how DeepSDF represents the continuous volumetric field of a single shape. Its complex curvature changes highlight DeepSDF’s superiority in handling fine details and closed surfaces, proving that neural networks can learn a fully continuous scalar field rather than just discrete voxels.
Bunny17

[ECCV '20] Points2Surf: Learning Implicit Surfaces from Point Clouds
This paper introduced a data-driven reconstruction method called Points2Surf. By combining local patch features with global coordinate information, the network learns a non-linear mapping from raw point clouds to SDFs. This enables robust, continuous implicit surface extraction for complex shapes without requiring surface normals.
Bunny18

[TOG '20] Neural Subdivision
This work proposed a data-driven mesh subdivision framework. By using neural networks to learn the geometric evolution rules of local meshes—replacing traditional linear subdivision rules like Catmull-Clark—it can automatically predict and restore non-linear details and sharp features while increasing model resolution. The Bunny is featured as a core example in the algorithm’s workflow diagram to illustrate the evolution from coarse to fine during training and inference.
Bunny19

[CVPR '22] CoNeRF: Controllable Neural Radiance Fields
This paper presented a controllable Neural Radiance Field (NeRF) framework. By introducing attribute masks and control signals into NeRF, it achieves high-fidelity rendering while allowing for precise geometric deformation, attribute editing, and interactive control over specific regions or semantic parts. The authors provided a codebase based on Kubric to generate the synthetic datasets used in the paper, which includes the Bunny.
Bunny20

[RAL '23] Differentiable Physics Simulation of Dynamics-Augmented Neural Objects (DANO)
DANO proposes a framework that couples neural implicit representations (like NeRF/SDF) with differentiable physics simulations. By assigning physical properties (such as mass and elasticity) to neural geometric objects and utilizing a differentiable dynamics solver, it enables highly realistic physical interaction simulation, parameter identification, and inverse motion optimization directly on neural representations.
The Bunny, as a complex non-convex geometry, demonstrates how an appearance-only NeRF can be transformed into a physics-aware DANO object. It also showcases the ability to handle multi-point contact and collisions on non-smooth surfaces without generating complex triangle meshes.
Bunny21

[CVPR '23] Pointersect: Neural Rendering with Cloud-Ray Intersection
Pointersect introduces a neural rendering algorithm based on point-cloud-ray intersection. By utilizing a differentiable “point-ray” intersection operator, the network can perform accurate visibility determination and geometric reasoning directly on raw point clouds. This eliminates the need for pre-processing meshes or voxels, achieving high-fidelity, multi-view consistent neural interpolation rendering. The Bunny is a key member of the test set.

[NeurIPS '24] Subsurface Scattering for 3D Gaussian Splatting
This paper proposes a 3D Gaussian Splatting (3DGS) rendering algorithm enhanced with Subsurface Scattering (SSS). By introducing physics-aware translucent material parameters and light transport models into traditional 3D Gaussians, it simulates multiple light scattering within an object. This achieves high-fidelity visual restoration of materials like skin, jade, and wax while maintaining 3DGS’s real-time advantage. The Bunny model is a core synthetic case used to quantitatively evaluate the decoupling and reconstruction of SSS effects, showcasing the rendering results of wax-like or marble-like bunnies under varying light conditions.
Bunny23

[IROS '24] Touch-GS: Visual-Tactile Supervised 3D Gaussian Splatting
This paper presents a 3DGS reconstruction framework supervised by both visual and tactile data. By incorporating fine local geometry information from tactile sensors (like DIGIT) as supplementary constraints, it solves the geometric inaccuracies traditional 3DGS faces when dealing with transparent, highly reflective, or occluded areas. Since the authors are from Stanford University, literally every image in this paper features the Bunny. It’s a “Bunny Dynasty” (haha)!
Bunny24

8. Others

[GM '19] Near Support-free Multi-directional 3D Printing via Global-optimal Decomposition
This paper proposes a model decomposition algorithm for multi-axis 3D printing that is nearly support-free. By constructing a globally optimal decomposition framework, complex geometries are partitioned into sub-parts with specific printing directions. This maximizes the self-supporting properties of each component while significantly reducing or even eliminating auxiliary support structures. The Bunny is a perfect candidate for 3D printing “stress tests,” and unsurprisingly, this paper is almost entirely filled with Bunny examples.
Bunny25


Afterword

How did such a strange article come to be?

Earlier this year, during a long flight back to China, I found myself bored and started searching through files on my laptop. When I typed “Bunny” into the search bar, I was shocked to find hundreds of hits. I knew there would be many, but I didn’t expect the volume to be this massive—especially since I had cleared my hard drive before going abroad, moving most of my papers to the cloud and keeping only a fraction locally.

As I looked back at my relatively short research career, I realized that the Bunny truly is everywhere. It felt like a serendipitous connection, as if the timing was just right to document its place on my timeline. I finished the draft of this article during the remaining five or six hours of the flight. Honestly, there was no great difficulty involved; it was mostly a labor of Ctrl+C and Ctrl+V, but it was a great way to pass the time. Yes, most of these words were written at 30,000 feet. After landing, I added some links and toned down a few of my more… aggressive descriptions (haha).

When I first started writing, I thought about mimicking the style of George Gamow’s Mr Tompkins in Wonderland—a book I loved in middle school (though I never ended up pursuing physics). The protagonist, Mr. Tompkins, enters fantastic worlds through dreams or everyday scenarios where physical laws are exaggerated or altered, explaining relativity, quantum mechanics, and atomic structure in a simple, engaging way. I thought it would be clever to enter the 3D world from the Bunny’s perspective, participating in all the research mentioned above. However, that idea lasted only a minute before I dismissed it. I don’t have the literary prowess or the time for that, nor do I have enough research depth yet. Better not to make a fool of myself trying to imitate a masterpiece.

After getting home, I sent the draft to a few classmates. Their reaction was unanimous: “You’re actually writing a biography for a rabbit?!” (Haha). They probably aren’t in the 3D field and don’t understand the “weight” this rabbit carries.

Why do we love the Bunny so much? (Many of the figures above could have used other models). Beyond its inherent 3D qualities that perfectly suit various tasks—though not all; the Bunny rarely appears in image-based tasks like SfM, MVS, or V-SLAM, at least in my limited reading—there is the “bellwether effect.” It was there at the very beginning, likely long before you even entered the field. When you see the Bunny in a paper, even without a citation, everyone knows exactly what it is. It’s like an old friend appearing in the middle of a dry research life:

“Hi there, it’s Bunny.”