Archive for February, 2026

Recreating Richard Cleave’s 1993 Holy Land Satellite View

Monday, February 23rd, 2026

In 1993, Richard Cleave (R. L. W. Cleave) wrote The Holy Land: A Unique Perspective, which to my knowledge (and as the book jacket says) represents the first time satellite imagery was directly used as a base layer for Bible maps. He writes that his source is a Landsat 5 image from January 18, 1987: “a cold, exceptionally clear and almost cloudless morning: the best of all possible mornings for a single contemporary image of the whole area.” He uses this image throughout the book and for his two-part Holy Land Satellite Atlas in 1999, which in turn serves as the basis for the NET Bible Maps (2003).

The U.S. government makes decades of Landsat imagery available, so I was curious whether it was possible to approximate Cleave’s classic look using modern methods. The answer is, “Yes, mostly”:

An attempt to match the look of Cleave's satellite imagery from 1999. This image stretches from Mount Hermon to the northern tip of the Gulf of Aqaba, and from near Gaza City to just past Damascus.
Also available as a Cloud Optimized GeoTIFF (40 MB) for GIS purposes and a KMZ (80 MB) for Google Earth. Both of these larger images include the Sinai peninsula, though I believe Cleave used a different source image and composite method in his books for that region.

If you’ve worked with satellite imagery, you know that the data comes in “bands”—in this case, red, green, and blue bands—that you combine to make a final image. The decisions you make when combining these bands dramatically affect the look of the output, and there’s no objectively correct answer. I tried to come close to Cleave’s decisions from the early 1990s, but my water ended up darker and my highlights brighter than his. It has a similar feel, though, down to the purple tones south of the Dead Sea. Complicating the match, the print colors of Cleave’s image vary from book to book, which suggests either printing variations or multiple rendering refinements. So I aimed to capture the character of the original, but the result is more an interpretation than a copy.
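To make the kind of decisions involved concrete, here’s a minimal sketch of combining bands into an RGB image, assuming the three bands are already loaded as NumPy arrays. The percentile and gamma values are purely illustrative, not Cleave’s (or my) actual choices:

```python
import numpy as np

def stretch_band(band, low_pct=2.0, high_pct=98.0, gamma=1.0):
    """Linearly stretch a band between two percentiles, then apply gamma.

    Small changes to these parameters dramatically change the final look;
    the values here are illustrative defaults, not anyone's real settings.
    """
    lo, hi = np.percentile(band, [low_pct, high_pct])
    scaled = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return scaled ** gamma

# Hypothetical Landsat band data as float arrays (red, green, blue).
red = np.random.rand(4, 4) * 3000
green = np.random.rand(4, 4) * 3000
blue = np.random.rand(4, 4) * 3000

# Stack into an RGB image; raising gamma on one channel darkens it,
# which is one way water and highlights end up looking so different
# between two renderings of the same scene.
rgb = np.dstack([
    stretch_band(red, gamma=1.1),
    stretch_band(green, gamma=1.0),
    stretch_band(blue, gamma=0.9),
])
```

Every choice above (which percentiles, which gamma, per-band or global stretch) changes the output, which is why two renderings of the same January 1987 scene can look quite different.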

About Richard Cleave

Cleave himself sounds like a fascinating fellow. Robert North in A History of Biblical Map Making describes him in 1979: “Dr. R. L. W. Cleave of the British Navy, after serving hospitals in Jordan and becoming concerned with the lack of aerial survey material of the Holy Land, resigned his commission to accept the offer to prepare a pictorial archive for a Time-Life project. When the 1967 war intervened, he was limited to working inside Israel, and with the guidance of Père Jean Prignaud of the École Biblique he prepared and published 1500 aerial views of all major archeological and geographical features of Cisjordan. To these have already been added some 500 more views of Sinai, Göreme, and some other sites mostly in Turkey” (p. 142).

His photos consistently appeared in Bible reference works from 1967 through the late 2000s and remain high quality even compared to today’s imagery—especially since they capture a world from 60 years ago. His aerial view of the City of David represents, to me, one of the clearest ever captured. Compare a similar perspective from 2014, which shows many more buildings and is harder to parse at a glance.

Cleave worked with James Monson to produce the Student Map Manual in 1979. The ambition described in this book’s preface is astonishing for the time. Cleave’s “Wide Screen Project” describes an entire geographically indexed multimedia learning system: for the audio, cassettes; for the visual, audio-synchronized slides plus maps; for learning, the Student Map Manual, guided tours, and a poster exhibit. This proposed learning system provides a practical use for his library of thousands of photos.

In 1993, he combined 149 of these photos along with the aforementioned satellite view into The Holy Land: A Unique Perspective. The afterword to this book is also ambitious: he describes the now-common (thanks to Google Earth) practice of draping satellite imagery over a digital elevation model to produce a 3D view.

Cleave and his son Adrian worked with “John K. Hall of the Israel Geological Institute, and Gennady Agranov and Craig Gotsman, computer scientists at the Technion, Haifa” to produce these 3D images, which would premiere in National Geographic’s June 1995 issue (“Satellite Revelations: New Views of the Holy Land”) and later form the core of 1999’s The Holy Land Satellite Atlas as part of RØHR Productions, Ltd. (Nicosia, Cyprus). In this 3D imagery, he uses SPOT panchromatic data to add detail (similar to Landsat 7’s panchromatic band).

This work required an international team in 1993; today you can (approximately) recreate it on a home computer. Including satellite imagery in Bible maps has become somewhat more common but remains unusual. Some of Tyndale’s current maps use a subtle satellite background. The Satellite Bible Atlas (2013) relies on satellite imagery for its whole premise. The Casual English Bible maps use 3D satellite images.

But Cleave wasn’t just thinking 3D in 1993; by adding a time dimension, he was thinking 4D:

Rohr Productions is now preparing a 2 1/2 hour videotape of 3D satellite animation, specifically designed for use with this atlas. This will have a 20 minute Introduction and 13 Regional Segments, each of approximately 10 minutes duration…. The spoken commentary in the video will be descriptive, designed to reinforce the regional commentary printed in the book.

Relevant low-level aerial photographs (selected from the book) will be inserted into the “flight path,” providing familiar details of the major Biblical/historical sites and geographical features, each presented in its appropriate regional context.

Therefore all three of the most important elements in the atlas will be fully represented in the videotape: viz. the regional commentary, satellite imagery and low-level aerial photography. The videotape will provide optimal visualization and the book optimal documentation. To be fully effective, both systems are necessary.

This system anticipates multimedia accompaniments to books. He also describes using a CD-ROM to provide interactivity in a way that didn’t become popular until twelve years later, with Google Earth’s release in 2005. The technology that underlies Google Earth didn’t even exist until 1999, at least six years after he wrote this paragraph:

In the case of the above videotape of simulated flights over the Holy Land, the actual flight paths have been predetermined for use in conjunction with the regional satellite maps in the atlas. Thus the viewer cannot alter these animation sequences in any way. Such personal intervention or “interactivity” is only possible if the 3D satellite data is supplied in digital format (on CD-ROMs), for use on the computer. Such use is already possible, of course, but only on the more powerful graphic work stations. We must still wait for comparable processing power and storage capacity in the PC world to provide this interactive option to a much wider group of Bible students, but it cannot be more than a few years away!

Cleave would ultimately produce this software. You can see some videos of a later version of it in use on YouTube. The effect is similar to Google Earth’s “tour” feature (which, again, came out more than a decade later). Here’s my recreation of the effect in Google Earth using the above image.

In all these cases—from aerial photos to multimedia education to satellite imagery to 3D views to 4D presentations to interactive explorations—Cleave saw the technological possibilities of the time and explored what they could mean for students of Bible geography.

What happened to the thousands of photos that Cleave took in the 1960s, though? Based on the hundreds he printed in his books and licensed to others, they’re very high quality and are an important historical record. Some of his posters and 3D satellite imagery remain available online (for now) in low-resolution forms, but I couldn’t find a repository of his photos. Maybe they live on as slides in a collection somewhere, waiting to be digitized and made more widely available. Until then, you can buy his books used or browse some of them on the Internet Archive.

Last Week, an LLM Out-Programmed Me

Sunday, February 8th, 2026

With last week’s release of Codex 5.3 and Opus 4.6, I had a new experience: an LLM showed itself to be a better programmer than I am. If you’ve seen my code, you may not think that’s a big achievement. But for the first time I saw, practically, how an AI could outperform me at something I take some measure of pride in. It was like Google’s Nano Banana Pro moment, but for coding.

Unlike my previous experiences with LLM coding, Codex 5.3 didn’t just have more familiarity with the syntax of a language or the functionality of a module; it solved an architectural problem better than I did. (It reused existing file artifacts instead of creating intermediate files.) Likely it had pulled the architectural pattern from somewhere else, but it was an elegant solution—superior to the workable-but-basic approach I’d been planning. In that instant, I felt like the future had arrived in a small way: it was better at this task than I was, not just faster at it.

LLMs have let me compress weeks of coding work into a few days. For the Bible Passage Reference Parser, I normally follow a six-month release schedule because changes take a lot of time, especially big refactoring changes like I’ve been planning for the next version (which moves language data to a different repo and adds 2,000 more languages). I’d been dreading this work for years because, with so many languages, dealing with exceptions would consume the bulk of the coding effort. I could barely manage exceptions with the 40 languages in the current repo, so adding 50x more didn’t sound fun.

However, Codex 5.3 made short work of the task, taking a few minutes to accomplish what would’ve taken me days of dedicated work, not that I’d ever be able to dedicate days straight to this project. I published the latest branch five months ahead of schedule (and remember, the schedule is six months long).

These models still make mistakes; you can’t yet let them code unattended. But their ability to plan ahead and write code according to that plan is now (at least sometimes) stronger than mine. A year ago, converting the reference-parser code from CoffeeScript to TypeScript involved a bunch of back-and-forth with ChatGPT; even with a straight 1:1 conversion, it still made questionable decisions that I corrected. With the latest models, LLMs are now correcting my questionable decisions.

Synthetic Satellite-Based Coloring for Historical Maps using Gaea 2

Monday, February 2nd, 2026

In 2018, I wrote about using terrain-generation software to make historical maps, with synthetic coloring to generate what look like satellite photos with modern features removed (cities, roads, agriculture, etc.).

This post expands on the earlier one, creating synthetic satellite coloring at scale. When combined with the hillshading and vegetation techniques I discussed recently, it produces credible synthetic map backgrounds down to scales of about 1:125,000 (30m per pixel). With higher-resolution hillshades and vegetation data, it’s credible to about 10m per pixel.

Here’s an example of this technique used in a zoomed-out view, compared to a satellite view of the same area. Both views have hillshading and vegetation layers added.

A side-by-side view of a synthetic satellite view of Israel and a real satellite view.
I don’t know why there are some random vertical and horizontal lines that look like graticules. They only show up when I export from QGIS.

The synthetic and satellite views look pretty close; the synthetic view depicts a more idealized view of the terrain with fewer drainage lines (note especially the southeastern corner) and less extreme color variations (for example, the orange area in the south, east of the Red Sea, is visible but less intense).

Here’s a zoomed-in area (1:250,000 scale) near the Dead Sea, again overlaid with hillshading and vegetation:

A side-by-side view of a synthetic and real satellite image of an area near the Dead Sea.

Zoomed in, the colors feel too uniform to me. There’s a decent amount of detailing when you zoom in even further, but it doesn’t read at this scale. I’m OK with it appearing a bit more maplike here because the color variations aren’t necessarily significant; I don’t want to distract viewers with unimportant detail. But I could maybe draw out the highlights a bit more.

See the third and fourth images in this post for an even-more-zoomed-in view.

Methodology

  1. Acquire medium-resolution satellite reflectance data for the area in question. I used 10m Sentinel-2 data I had from 2021’s Bible Atlas project. This data came from Sentinel Hub, but today I might use an annual or quarterly mosaic from Copernicus. NASA’s 30m Harmonized Landsat-Sentinel data is another potential data source.
  2. Mask any pixels with modern development or forest cover using the Global Land Cover dataset from the University of Maryland (2020).
  3. Create an 8,192×8,192-pixel tile of the desired area.
  4. Blur the tile to fill in missing pixels and prevent any remaining modern pixels from leaking into the image.
  5. Create an elevation tile of the same area (normalizing the elevation values to 0-1). I used GEDTM30.
  6. Pull the colors and elevation into Gaea 2 (a terrain-generation app) and use the Color Erosion tool to create plausible color flows to add detail. This process took about ten minutes per tile on my PC.
  7. Add geodata to Gaea 2’s output.
  8. Move on to the next tile, with a 1,024-pixel overlap to allow smoothing between tiles.
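Steps 2 and 4 can be sketched as a mask-normalized Gaussian blur, which fills masked holes using only valid neighbors so that no modern pixels leak into the fill. This is a sketch with NumPy/SciPy; the function name and sigma are my own illustrative choices, not the exact pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_and_fill(reflectance, keep_mask, sigma=20.0):
    """Zero out masked (developed/forested) pixels, then fill the holes
    with a mask-normalized Gaussian blur.

    Blurring data*mask and the mask separately, then dividing, weights
    the blur by pixel validity so masked values contribute nothing.
    """
    weights = keep_mask.astype(float)
    blurred = gaussian_filter(reflectance * weights, sigma)
    norm = gaussian_filter(weights, sigma)
    filled = np.divide(blurred, norm, out=np.zeros_like(blurred),
                       where=norm > 1e-6)
    # Keep original values where they were valid; use the fill elsewhere.
    return np.where(keep_mask, reflectance, filled)
```

Run per-band on each 8,192×8,192 tile, this produces the cloudy-but-clean background that Gaea 2’s Color Erosion then re-details.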

This method automates well; I used it to generate fake satellite data at 10m resolution for 400,000 square kilometers. It’s designed to be overlaid with hillshading and vegetation, not stand on its own.

If you’d like to recreate it, here’s an AI-generated overview of the pipeline and my Gaea 2 file (if you use it, you’ll likely want to adjust the file paths).

Limitations

Tiles with a lot of development and agriculture have a cloudy look thanks to the blurring and the smaller number of valid pixels to work with. The west side of the below image (which excludes hillshading and vegetation), where urban Jerusalem is located, has an indistinct feel to it. The hillshading and vegetation cover up this haziness in the final image, but some of it does leak through.

The same view around the Dead Sea without hillshading and vegetation.

In mountainous areas, not all the color depth is preserved. The below satellite view of part of the Sinai peninsula shows darker tones in the mountains and more contrast in the drainage areas, compared to the synthetic view. The orange area in the northwest also shows up better in the satellite view. When compared side-by-side, the synthetic view feels like a render, lacking some heft.

Synthetic and satellite views of the area around Jebel Katherina in the Sinai peninsula.

I didn’t try this technique outside my area of interest, so it may not apply to other, less-arid biomes.

Conclusion

This method is a decently scalable way to generate realistic-looking synthetic satellite views. The result holds up well from scales of 1:1,000,000 (though at that scale, I’d just use Natural Earth II plus vegetation) down to scales of 1:125,000 or so. For historical mapping (such as for Bible maps), it recreates a plausible (but stylized) view of how the terrain might have looked in the past, before modern urban infrastructure. It gives a modern feel to a view of the past.

Recent Hillshading Advances for Bible Maps

Sunday, February 1st, 2026

Since 2015, three major advances in hillshading data, surfaces, and lighting have made maps more attractive than before while keeping them accurate and efficient to create.

(“Hillshading” means using shadow, light, and sometimes color to turn raw elevation data into something easily understandable by humans.)

Data advances: 30m digital elevation models

From 2003 through August 2015, 90m-per-pixel SRTM data offered the best available resolution for the Middle East. Consequently, Bible atlases produced during this time have hillshading that looks something like the following, which is based on that data. (All the maps in this post show an area around the Dead Sea.)

Lambert hillshade of the area around the Dead Sea with SRTM 90m as the data source.

NASA released 30m-per-pixel elevation data in 2015, offering 9x the pixel density. Everything feels crisper, though the extra detail makes the larger structures harder to discern:

Lambert hillshade of the area around the Dead Sea at a resolution of 30m per pixel.

Surface advances: Eduard

The above hillshading style, called “Lambertian,” derives from the 1700s. It’s computationally inexpensive (the standard algorithm was published in 1981, and it could run on 1992-era computer hardware) and produces a decent result. This algorithm remains popular today; the standard ArcGIS hillshade function takes essentially the same approach.
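To show how inexpensive it is, here’s the classic Lambertian computation in a few lines of NumPy. This sketch follows the standard slope/aspect formulation; the 315°/45° azimuth/altitude defaults are the usual cartographic convention, not settings tied to the maps shown here:

```python
import numpy as np

def lambertian_hillshade(dem, cellsize=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Classic Lambertian hillshade: per-pixel slope and aspect from the
    DEM, then the cosine of the angle between the surface normal and a
    single light source. Each pixel is lit independently, which is why
    the result reads as busy "wrinkled tinfoil" at scale.
    """
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass to math convention
    zen = np.radians(90.0 - altitude_deg)        # altitude to zenith angle
    dy, dx = np.gradient(dem, cellsize)          # elevation derivatives
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(dy, -dx)
    shade = (np.cos(zen) * np.cos(slope)
             + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)  # 0 = full shadow, 1 = fully lit
```

A flat surface shades to cos(45°) ≈ 0.71 everywhere; all the visual interest comes from per-pixel slope, with no awareness of larger structure.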

Lambertian hillshading appeals to a modern desire for precision and accuracy when compared to older, manual hillshading methods. Since an algorithm produces the hillshade, the viewer can presumably have confidence that they’re seeing a true depiction of the world. 1992’s Hammond Atlas of the World was the “first all-digital world atlas”; its introduction mentions “producing maps more accurately and more efficiently than ever before.”

In an AI era, however, we no longer have the luxury of believing that an algorithm neutrally presents reality. Algorithms shape us as much as we shape them. Lambertian hillshading presents a view of reality, but it’s not necessarily more “accurate” than manual hillshading; its purpose, approximating pixel-level lighting, reflects a computationally efficient point of view on what’s important to depict.

More practically, the main problem with Lambertian hillshading is that it “looks sort of like wrinkled tinfoil; full of sharp edges.” It’s busy, creating lots of detail while obscuring larger- and smaller-scale structures. So it’s accurate, but it doesn’t communicate well. By contrast, manual hillshading didn’t necessarily prioritize accuracy but emphasized helping the viewer understand the terrain’s structure. There are ways to make Lambertian hillshading read better (such as resolution bumping), but we now have better algorithms available.

Specifically, we have algorithms that mimic manual hillshading. Eduard (which I’ve mentioned previously) came out in 2022 and is specifically designed to recreate the look of twentieth-century Swiss cartographers, who “were widely regarded as preeminent in the development of printed maps that demonstrated a more naturalistic approach to relief portrayal.”

Eduard models surfaces better by addressing the question, “What form should the viewer see?” Rather than just modeling light (as Lambertian hillshading does), it employs multi-scale smoothing (suppressing noise compared to Lambertian’s pixel independence), a ridge/valley emphasis, and appropriate generalization to emphasize structure.

The below map, created with Eduard, uses the same 30m source DEM as the previous map but makes overall geomorphology clearer; small structures coalesce into larger ones, and ridges and valleys are clearer.

An Eduard-created hillshade of the same area makes structure clearer.

Eduard also generalizes well. The below map makes the overall structure of the Old Testament’s “Promised Land” clear, with coastal plains on the west moving into foothills, then into a central, hilly spine that gives way quickly to a rift valley with the Jordan River. This map preserves the large structures that allow the viewer to focus on the big picture.

A zoomed-out view of the eastern Mediterranean, reaching from Egypt to Jordan up to Syria in the north. The relief is abstracted well for the scale.

Lighting advances: sky models

The final advance since 2015 involves the physics of rendering lighting. Daniel Huffman blogged about using Blender for shaded relief in 2013 and popularized it in a 2017 tutorial. This technique involves using 3D modeling software to produce more-realistic shadows than Lambertian shading does.

(ArcGIS introduced multidirectional hillshades in 2014, which is a refinement to the standard Lambertian approach but still creates an unnatural plastic effect to my eye. They also introduced several more hillshading tools in 2015.)

The below map uses the Sky Model in the Terrain Shader Toolbox plugin for QGIS to produce a Blender-like effect using just shadows. (Check out this video for more background on this plugin.) The Sky Model creates 200 lighting snapshots from different angles and then combines them to produce a strong, dramatic shadowing effect. The Arnon gorge in the bottom right is clearly visible, as is the El Buqeia valley near the northwestern coast of the Dead Sea. It also captures the drama of the gorges along the western coast of the Dead Sea.

A skybox view of the same Dead Sea area shows much more dramatic relief.
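The general idea behind combining many lighting snapshots can be sketched as averaging Lambertian renders sampled across the sky dome. This is a rough approximation of the concept only, not the plugin’s actual algorithm (it omits the cast shadows and sky-luminance weighting that give the real output its drama), and 64 samples here stand in for the plugin’s 200:

```python
import numpy as np

def sky_model_shade(dem, cellsize=30.0, n_az=16, n_alt=4):
    """Average Lambertian shades over a grid of sky directions.

    Sampling many azimuths and altitudes softens the single-light-source
    artifacts of standard hillshading and deepens shadows in areas (like
    gorges) that face away from most of the sky.
    """
    dy, dx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(dy, -dx)
    total = np.zeros_like(dem, dtype=float)
    count = 0
    for alt in np.linspace(20.0, 70.0, n_alt):        # altitudes above horizon
        zen = np.radians(90.0 - alt)
        for az_deg in np.linspace(0.0, 360.0, n_az, endpoint=False):
            az = np.radians(360.0 - az_deg + 90.0)    # compass to math convention
            shade = (np.cos(zen) * np.cos(slope)
                     + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
            total += np.clip(shade, 0.0, 1.0)
            count += 1
    return total / count
```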

Combining Approaches

The sky-model (or skybox) approach does have drawbacks; it preserves local features compellingly but doesn’t generalize them well. The best overall approach, in my opinion, is to combine 30m Eduard shading with the sky model, reducing their opacity so that they don’t overwhelm the landscape. This approach combines the generalizing features from Eduard with the detailed shadows from the sky model to produce an accurate, easy-to-understand hillshade.
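As a sketch of the compositing, stacking the two layers semi-transparently over a white base works like this. The function name and opacity values are my own illustrations; in practice you’d stack the layers in QGIS and tune opacities by eye:

```python
import numpy as np

def blend_hillshades(eduard, sky, eduard_opacity=0.5, sky_opacity=0.5):
    """Composite two hillshade layers (values in 0-1) over a white base,
    mimicking reduced-opacity layer stacking in a GIS. Lowering each
    layer's opacity keeps either shade from overwhelming the landscape.
    """
    base = np.full_like(eduard, 1.0)  # white base layer
    out = base * (1 - eduard_opacity) + eduard * eduard_opacity
    out = out * (1 - sky_opacity) + sky * sky_opacity
    return np.clip(out, 0.0, 1.0)
```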

Conclusion

Recent advances in data, surfaces, and lighting make hillshading from even ten years ago feel low resolution and computationally sterile. Hillshading from 1990 to 2020 fits into a historical era when “accuracy” and “efficiency” came to the forefront. It was based on the best data and techniques of the time, but new techniques allow us to move beyond Lambertian hillshading.

I expect that future Bible cartography will use these advances to produce attractive and understandable relief maps where the terrain depiction supports the map’s purpose, contributing to the map’s story without being distracting.