Alcatraz visualization rendered in Twinmotion 2025.1.
Courtesy of Pete Kelsey, VCTO Labs

Spotlights

June 16, 2025

Digitizing The Rock: How Pete Kelsey and RealityScan 2.0 brought Alcatraz into the Epic ecosystem

When VCTO Labs set out to create the most comprehensive 3D model ever of Alcatraz Island, they chose RealityScan to generate a stunning and detailed digital replica from aerial LiDAR, photos, and Cesium tiles.

Tags: Alcatraz · Architecture · Twinmotion · VCTO Labs · Visualization

Rising from the waves of San Francisco Bay, Alcatraz Island is one of the most iconic historical sites in the United States. Best known as a maximum-security prison that once held notorious inmates like Al Capone and Robert Stroud, the “Birdman of Alcatraz,” the island also served as a military fortification and, later, as a symbol of Native American activism during the 1969–71 occupation. Today, it's a protected site managed by the National Park Service, attracting over a million visitors annually.

But Alcatraz, also known by its nickname “The Rock,” is more than a tourist attraction—it's a living piece of history. Like many culturally significant landmarks exposed to the elements, the island faces no greater adversary than time. That’s why Pete Kelsey, founder of VCTO Labs and a longtime advocate for digital preservation, led a groundbreaking project to create the most comprehensive 3D model ever made of Alcatraz. The goal: establish a digital baseline to study the future impacts of sea-level rise, erosion, and seismic activity on the island.
Aerial view of 3D scanned Alcatraz Island
Courtesy of Pete Kelsey, VCTO Labs
To digitize the exterior of The Rock, Pete combined multiple sensing technologies, including high-resolution photogrammetry, multispectral imagery, and aerial LiDAR. Due to the sheer volume of data and poor internet connectivity on the island, cloud processing wasn't an option—the data had to be processed on-site immediately after each scan. This meant that Pete and the team needed to ensure that all data was captured and usable before departing from Alcatraz.

Unlocking the digital future of Alcatraz with RealityScan 2.0

Some of the data was processed directly in RealityScan (formerly known as RealityCapture). “I knew I wanted to use this because it’s one of the only products I know about that can integrate both LiDAR and photogrammetry data into a single model,” Pete recalls. “I’ll never forget that day in the office at Alcatraz, with RealityScan crunching away on our capture data—probably the photogrammetry.”

At the time, RealityScan supported only the combination of photogrammetry with terrestrial LiDAR. Pete contacted our team to see if we could help merge the drone imagery with aerial LiDAR scans. Over at RealityScan, we were excited to contribute—aerial LiDAR support was already in development, and the Alcatraz project became the perfect real-world test case for RealityScan 2.0. Thanks to this collaboration, RealityScan now officially supports the combination of photogrammetry with both terrestrial and aerial LiDAR.

Processing the data in RealityScan 2.0

Pete provided us with a survey network of 62 ground control points, 2,805 drone-captured images, and the aerial LiDAR scan. The photogrammetry data and ground control points were processed using the standard workflow: all images were imported into a RealityScan project and aligned. After initial alignment, we imported the ground control points, marked them in the images, disabled the less accurate GPS metadata from the drone, and re-ran the alignment to optimize the result.
Alcatraz photogrammetry data aligned in RealityScan
Courtesy of Pete Kelsey, VCTO Labs
Next, we imported the aerial LiDAR data using the Import LiDAR Scan tool in the Workflow tab. RealityScan automatically recognized the dataset as aerial LiDAR. During import, RealityScan generated virtual cameras to render the laser scan, resulting in .lsp files that can be used for alignment and mesh generation, just like with terrestrial LiDAR.

Three options are available for generating virtual cameras:
  • From camera pose priors
  • From component
  • Generate aerial poses
The From camera pose priors option reuses the prior georeferencing of the images to create the virtual cameras. If you have an existing camera alignment, you can use the From component option to render the laser scan point cloud from the already aligned cameras. The Generate aerial poses option creates the cameras in a regular grid above the point cloud, which is especially useful if you want to process aerial LiDAR data alone, without any photogrammetry.
Since we had 2,332 aligned images in a single component, reusing those would have been overkill. Instead, we chose Generate aerial poses, which created a regular grid of virtual cameras above the point cloud.
Alcatraz aerial LiDAR data in RealityScan
Courtesy of Pete Kelsey, VCTO Labs
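The idea behind Generate aerial poses can be sketched in a few lines of Python: place nadir-facing virtual cameras on a regular grid over the point cloud's bounding box, at a fixed height above the highest return. This is only an illustration of the concept, not RealityScan's actual implementation; the function and parameter names here are hypothetical.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class CameraPose:
    x: float
    y: float
    z: float  # nadir-facing (straight down) camera at this position

def generate_aerial_poses(points, spacing, altitude_margin):
    """Place nadir-facing virtual cameras on a regular grid above the
    bounding box of a point cloud, given as (x, y, z) tuples."""
    xs, ys, zs = zip(*points)
    z = max(zs) + altitude_margin  # one flying height above the highest point

    def ticks(lo, hi):
        # Grid coordinates from lo to hi at the requested spacing.
        n = max(1, int((hi - lo) // spacing) + 1)
        return [lo + i * spacing for i in range(n)]

    return [CameraPose(x, y, z)
            for x, y in product(ticks(min(xs), max(xs)), ticks(min(ys), max(ys)))]
```

For a toy four-point cloud spanning 10 × 10 m with 5 m spacing, this yields a 3 × 3 grid of poses, all at the same altitude above the highest point.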
Because the point cloud was very dense, the .lsp files looked like actual photographs, making ground control point marking easy and accurate.
Alcatraz generated .lsp file of the aerial LiDAR scan
Courtesy of Pete Kelsey, VCTO Labs
We marked several GCPs in the intensity channel, which proved more precise than the color channel due to typical color shifts in aerial LiDAR.
Marked ground control points on the intensity channel of an .lsp file.
Courtesy of Pete Kelsey, VCTO Labs
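Part of why the intensity channel reads like a grayscale photograph is how raw return intensities get mapped to pixel values. The sketch below shows the common percentile-clipping approach to that mapping; it illustrates the general technique only, not RealityScan's renderer, and the function name is our own.

```python
def intensity_to_gray(intensities, lo_pct=2.0, hi_pct=98.0):
    """Map raw LiDAR intensity values to 8-bit gray, clipping to the
    2nd-98th percentile range so outliers don't wash out the image."""
    vals = sorted(intensities)

    def pct(p):
        # Simple nearest-rank percentile; fine for a sketch.
        return vals[min(len(vals) - 1, int(p / 100 * len(vals)))]

    lo, hi = pct(lo_pct), pct(hi_pct)
    span = max(hi - lo, 1e-9)  # avoid division by zero on flat data
    return [round(255 * min(max((v - lo) / span, 0.0), 1.0)) for v in intensities]
```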
After GCPs were marked, we re-aligned the project and achieved a single component combining both photogrammetry and aerial LiDAR data.
Photogrammetry and LiDAR data aligned together in RealityScan.
Courtesy of Pete Kelsey, VCTO Labs
The best of both worlds: LiDAR for geometry, photogrammetry for textures

With the dataset aligned, we used the aerial LiDAR data to reconstruct the mesh and the photogrammetry to generate high-resolution textures. The results were impressive. The LiDAR-based mesh reconstruction produced over 200 million polygons, and RealityScan generated twenty-one 8K textures for maximum detail.

Certain surfaces—such as the roof structures, edges, and areas with minimal texture—were significantly better captured with LiDAR than with photogrammetry alone. The hybrid workflow unlocked the strengths of both technologies.
 
Alcatraz high-detail photogrammetry reconstruction.
Courtesy of Pete Kelsey, VCTO Labs
Alcatraz aerial LiDAR reconstruction.
Courtesy of Pete Kelsey, VCTO Labs
To support the project, AMD provided a powerhouse workstation featuring the 96-core Threadripper Pro 7995WX CPU. “I brought up Task Manager and saw 96 blue squares, all running at 100% at 4.7 GHz. I don’t mind admitting that I squealed with excitement,” Kelsey said.

Even more impressive were the performance metrics:
  • Mesh calculation from aerial LiDAR: 13 minutes 48 seconds
  • Photogrammetry normal detail reconstruction: 1 hour 38 minutes 55 seconds
  • Photogrammetry high detail reconstruction: 7 hours 6 minutes 22 seconds
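For scale, a quick back-of-the-envelope calculation of our own, using the round 200-million-polygon figure quoted earlier, puts the LiDAR meshing throughput in the hundreds of thousands of polygons per second:

```python
# Convert the quoted wall-clock times to seconds and estimate throughput.
def to_seconds(hours=0, minutes=0, seconds=0):
    return hours * 3600 + minutes * 60 + seconds

lidar_mesh_time = to_seconds(minutes=13, seconds=48)  # 828 s
polygons = 200_000_000  # approximate LiDAR mesh size reported above
print(f"{polygons / lidar_mesh_time:,.0f} polygons per second")  # prints "241,546 polygons per second"
```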

From scan to scene: Visualizing Alcatraz in Unreal Engine and Twinmotion

Since Epic Games develops powerful rendering tools such as Unreal Engine and Twinmotion, it was only natural to visualize the Alcatraz scan. And the result was breathtaking.

Alcatraz, situated in the middle of San Francisco Bay, was a perfect subject. After importing the high-resolution mesh into Twinmotion, we added an ocean plane and volumetric clouds from the Ambience options and used built-in assets to enrich the environment. Within minutes, we had a real-time, photorealistic flythrough of the island.
 
Alcatraz visualization rendered in Twinmotion 2025.1.
Courtesy of Pete Kelsey, VCTO Labs
The setup in Unreal Engine was also straightforward. We enabled the Water plugin to add the ocean surrounding the island and installed the Cesium for Unreal plugin to bring in real-world geospatial data. With Cesium, we added the surrounding areas, including landmarks like the Golden Gate Bridge, using Google Photorealistic 3D Tiles. 
Alcatraz scan in Unreal Engine 5.6.
Courtesy of Pete Kelsey, VCTO Labs
Bringing the past into the future

Alcatraz has stood as a silent witness to some of America’s most turbulent times. Today, thanks to Pete Kelsey, his extended team, and the RealityScan team, it also stands as a monument to what's possible when cutting-edge technology meets cultural preservation.

The Alcatraz project shows how LiDAR and photogrammetry can complement one another—and how the RealityScan 2.0 pipeline, together with the Unreal Engine ecosystem, empowers creators to digitize, preserve, and share history like never before.

As we look to the future, our mission is clear: be the cornerstone of 3D content creation and the indispensable tool for creators bringing the real world into the photoreal metaverse, offering the most advanced, intuitive, and reliable scanning software on the market.

Because when you can scan reality, you can preserve it. You can understand it. And most importantly, you can share it.
 

Download RealityScan

RealityScan is free to use for students, educators, and individuals and companies with an annual gross revenue below $1 million USD.

Above the $1M threshold? Visit our licensing page to find out about your purchasing options.

Download the launcher

Before you can install and run RealityScan, you’ll need to download and install the Epic Games launcher.


Once downloaded and installed, open the launcher and create or log in to your Epic Games account.


Install RealityScan

Once logged in, navigate to the RealityScan tab in the Unreal Engine section and click the install button to download the most recent version.

Looking for RealityScan Mobile?

Grab your phone and start scanning.
