Rising from the waves of San Francisco Bay, Alcatraz Island is one of the most iconic historical sites in the United States. Best known as a maximum-security prison that once held notorious inmates like Al Capone and Robert Stroud, the “Birdman of Alcatraz,” the island also served as a military fortification and, later, as a symbol of Native American activism during the 1969–71 occupation. Today, it's a protected site managed by the National Park Service, attracting over a million visitors annually.
But Alcatraz, also known by its nickname “The Rock,” is more than a tourist attraction—it's a living piece of history. Like many culturally significant landmarks exposed to the elements, it faces a relentless adversary: time. That’s why Pete Kelsey, founder of VCTO Labs and a longtime advocate for digital preservation, led a groundbreaking project to create the most comprehensive 3D model ever made of Alcatraz. The goal: establish a digital baseline to study the future impacts of sea-level rise, erosion, and seismic activity on the island.
To digitize the exterior of The Rock, Pete combined multiple sensing technologies, including high-resolution photogrammetry, multispectral imagery, and aerial LiDAR. Due to the sheer volume of data and poor internet connectivity on the island, cloud processing wasn't an option—the data had to be processed on-site immediately after each scan. This meant that Pete and the team needed to ensure that all data was captured and usable before departing from Alcatraz.
Unlocking the digital future of Alcatraz with RealityScan 2.0
Some of the data was processed directly in RealityScan (formerly known as RealityCapture). “I knew I wanted to use this because it’s one of the only products I know about that can integrate both LiDAR and photogrammetry data into a single model,” Pete recalls. “I’ll never forget that day in the office at Alcatraz, with RealityScan crunching away on our capture data—probably the photogrammetry.”
At the time, RealityScan supported only the combination of photogrammetry with terrestrial LiDAR. Pete contacted our team to see if we could help merge the drone imagery with aerial LiDAR scans. Over at RealityScan, we were excited to contribute—aerial LiDAR support was already in development, and the Alcatraz project became the perfect real-world test case for RealityScan 2.0. Thanks to this collaboration, RealityScan now officially supports the combination of photogrammetry with both terrestrial and aerial LiDAR.
Processing the data in RealityScan 2.0
Pete provided us with a survey network of 62 ground control points, 2,805 drone-captured images, and the aerial LiDAR scan. The photogrammetry data and ground control points were processed using the standard workflow: all images were imported into a RealityScan project and aligned. After initial alignment, we imported the ground control points, marked them in the images, disabled the less accurate GPS metadata from the drone, and re-ran the alignment to optimize the result.
With 2,332 images already aligned in a single component, reusing those camera poses for the LiDAR point cloud would have been overkill. Instead, we chose Generate aerial poses, which created a regular grid of virtual cameras above the point cloud.
Because the point cloud was very dense, the .lsp files looked like actual photographs, making ground control point marking easy and accurate.
We marked several GCPs in the intensity channel, which proved more precise than the color channel due to typical color shifts in aerial LiDAR.
After GCPs were marked, we re-aligned the project and achieved a single component combining both photogrammetry and aerial LiDAR data.
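Conceptually, a regular grid of nadir virtual cameras above a point cloud can be sketched as below. This is an illustration of the idea, not RealityScan's implementation; the function name, spacing, and altitude margin are all hypothetical parameters.

```python
# Conceptual sketch: lay out a regular grid of nadir "virtual camera"
# positions above a point cloud's bounding box, the idea behind a
# Generate-aerial-poses step. Not RealityScan's actual algorithm.

def aerial_pose_grid(points, spacing, altitude_margin):
    """Return (x, y, z) camera positions on a regular grid above the points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    z_cam = max(zs) + altitude_margin  # fly above the highest point
    poses = []
    y = min(ys)
    while y <= max(ys):
        x = min(xs)
        while x <= max(xs):
            poses.append((x, y, z_cam))  # nadir view, looking straight down
            x += spacing
        y += spacing
    return poses

# Tiny example: a 10 m x 10 m footprint sampled every 5 m.
cloud = [(0.0, 0.0, 1.0), (10.0, 10.0, 4.0)]
grid = aerial_pose_grid(cloud, spacing=5.0, altitude_margin=30.0)
print(len(grid))  # 3 x 3 grid of virtual cameras
```

Each virtual camera then renders the point cloud from its grid position, which is what makes a dense cloud look photograph-like from those views.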
The best of both worlds: LiDAR for geometry, photogrammetry for textures
With the dataset aligned, we used the aerial LiDAR data to reconstruct the mesh and the photogrammetry to generate high-resolution textures. The results were impressive: the LiDAR-based mesh reconstruction produced over 200 million polygons, and RealityScan generated twenty-one 8K textures for maximum detail.
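To put those texture numbers in perspective, here is a quick back-of-the-envelope calculation. It assumes “8K” means 8192 × 8192 texels and uses uncompressed 8-bit RGBA purely as an illustrative baseline; neither assumption is stated in the project itself.

```python
# Scale check for the texturing output: twenty-one 8K texture maps.
# 8192 x 8192 texels per map and 4 bytes per texel (uncompressed RGBA8)
# are illustrative assumptions, not figures from the project.

TEXTURES = 21
SIDE = 8192
BYTES_PER_TEXEL = 4

texels = TEXTURES * SIDE * SIDE
gigabytes = texels * BYTES_PER_TEXEL / 1024**3

print(f"{texels / 1e9:.2f} billion texels")   # 1.41 billion texels
print(f"~{gigabytes:.2f} GB uncompressed")    # ~5.25 GB uncompressed
```

In practice, engine-side texture compression brings that footprint down considerably, but the raw texel count conveys the level of detail captured.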
Certain surfaces—such as the roof structures, edges, and areas with minimal texture—were significantly better captured with LiDAR than with photogrammetry alone. The hybrid workflow unlocked the strengths of both technologies.
The setup in Unreal Engine was also straightforward. We enabled the Water plugin to add the ocean surrounding the island and installed the Cesium for Unreal plugin to bring in real-world geospatial data. With Cesium, we added the surrounding areas, including landmarks like the Golden Gate Bridge, using Google Photorealistic 3D Tiles.
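For reference, enabling both plugins can also be recorded directly in the project's `.uproject` descriptor. The plugin names below are the standard identifiers for Unreal Engine's Water plugin and Cesium for Unreal; the surrounding fields are a minimal stand-in, not the project's actual file.

```json
{
  "FileVersion": 3,
  "Plugins": [
    { "Name": "Water", "Enabled": true },
    { "Name": "CesiumForUnreal", "Enabled": true }
  ]
}
```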
Bringing the past into the future
Alcatraz has stood as a silent witness to some of America’s most turbulent times. Today, thanks to Pete Kelsey, his extended team, and the RealityScan team, it also stands as a monument to what's possible when cutting-edge technology meets cultural preservation.
The Alcatraz project shows how LiDAR and photogrammetry can complement one another—and how the RealityScan 2.0 pipeline, together with the Unreal Engine ecosystem, empowers creators to digitize, preserve, and share history like never before.
As we look to the future, our mission is clear: to be the cornerstone of 3D content creation, the indispensable tool for bringing the real world into the photoreal metaverse, by offering the most advanced, intuitive, and reliable scanning software on the market.
Because when you can scan reality, you can preserve it. You can understand it. And most importantly, you can share it.