• Vibe Coding and the Fragmentation of Open Source
    Feb 3 2026
    Why Machine-Written Code is the Best (and Most Dangerous) Thing for Geospatial

    The current discourse surrounding AI coding is nothing if not polarized. On one side, the technofuturists urge us to throw away our keyboards; on the other, skeptics dismiss Large Language Models (LLMs) as little more than "fancy autocomplete" that will never replace a "real" engineer. Both sides miss the nuanced reality of the shift we are living through right now. I recently sat down with Matt Hansen, Director of Geospatial Ecosystems at Element 84, to discuss this transition. With a 30-year career spanning the death of photographic film to the birth of Cloud-Native Geospatial, Hansen has a unique vantage point on how technology shifts redefine our roles. He isn't predicting a distant future; he is describing a present where the barrier between an idea and a functioning tool has effectively collapsed.

    The "D" Student Who Built the Future

    Hansen's journey into the heart of open-source leadership began with what he initially thought was a terminal failure. As a freshman at the Rochester Institute of Technology, he found himself in a C programming class populated almost entirely by seasoned professionals from Kodak. Intimidated and overwhelmed by the "syntax wall," he withdrew from the class the first time and scraped by with a "D" on his second attempt. For years, he believed software simply wasn't his path. Today, however, he is a primary architect of the SpatioTemporal Asset Catalog (STAC) ecosystem and a major open-source contributor. This trajectory is a perfect case study in the democratizing power of AI: it allows the subject matter expert—the person who understands photographic technology or imaging science—to bypass the mechanical hurdles of brackets and semicolons.

    "I took your class twice and thought I was never software... and now here I am like a regular contributor to open source software for geospatial." — Matt Hansen to his former professor.

    The Rise of "Vibe Coding" and the Fragmentation Trap

    We are entering the era of "vibe coding," where developers prompt AI based on a general description or "vibe" of what they need. While this is exhilarating for the individual, it creates a systemic risk of "bespoke implementations." When a user asks an AI for a solution without a deep architectural understanding, the machine often generates a narrow, unvetted fragment of code rather than using a secure, scalable library. The danger here is a catastrophic loss of signal. If thousands of users release these AI-generated fragments onto platforms like GitHub, we risk drowning out the vetted, high-quality solutions that the community has spent decades building. We are creating a "sea of noise" that could make it harder for both humans and future AI models to identify the standard, proper way to solve a problem.

    Why Geospatial is Still "Special" (The Anti-meridian Test)

    For a long time, the industry mantra has been "geospatial isn't special," pushing for spatial data to be treated as just another data type, as in GeoParquet. However, Hansen argues that AI actually proves that domain expertise is more critical than ever. Without specific guidance, AI often fails to account for the unique edge cases of a spherical world. Consider the "anti-meridian" problem: polygons crossing the 180th meridian. When asked to handle spatial data, an AI will often "brute force" custom logic that works for a small, localized dataset but fails the moment it encounters the wrap-around logic of a global scale.
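    To see the failure mode concretely: treat longitude/latitude pairs as flat planar coordinates, as naively generated code often does, and a footprint that crosses the antimeridian turns into nonsense. A minimal sketch with shapely (the coordinates are invented purely for illustration):

      from shapely.geometry import Polygon

      # A satellite-scene footprint near Fiji whose longitudes jump
      # from +179 to -179 as it crosses the 180th meridian.
      footprint = Polygon([(179, -17), (-179, -17), (-179, -15), (179, -15)])

      # Treated as flat planar coordinates, the footprint appears to span
      # almost the entire globe instead of roughly two degrees of longitude.
      print(footprint.bounds)  # (-179.0, -17.0, 179.0, -15.0)
      print(footprint.area)    # ~716 square degrees rather than ~4

    A vetted library handles the split across the 180th meridian instead of leaving the caller with a world-spanning polygon.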
    A domain expert knows to direct the AI toward Pete Kadomsky's "anti-meridian" library. AI is not a subject matter expert; it is a powerful engine that requires an expert navigator to avoid the "Valley of Despair."

    Documentation is Now SEO for the Machines

    We are seeing a counterintuitive shift in how we value documentation. Traditionally, README files and tutorials were written by humans, for humans. In the age of AI, documentation has become the primary way we "market" our code to the machines. If your open-source project lacks a clean README or a rigorous specification, it is effectively invisible to the AI-driven future of development. By investing in high-quality documentation, developers are engaging in a form of technical SEO: you are ensuring that when an AI looks for the "signal" in the noise, it chooses your vetted library because it is the most readable and reliable option available.

    From Software Developers to Software Designers

    The role of the geospatial professional is shifting from writing syntax to what Hansen calls the "Foundry" model. Using tools like GitHub Spec Kit, the human acts as a designer, defining rigorous blueprints, constraints, and requirements in human language. The machine then executes the "how," while the human remains the sole arbiter of the "what" and "why." Hansen's advice for the next generation—particularly those entering a job market ...
    37 m
  • A5 Pentagons Are the New Bestagons
    Jan 19 2026
    How can you accurately aggregate and compare point-based data from different parts of the world? When analyzing crime rates, population, or environmental factors, how do you divide the entire globe into equal, comparable units for analysis? For data scientists and geospatial analysts, these are fundamental challenges. The solution lies in a powerful class of tools called Discrete Global Grid Systems (DGGS). These systems provide a consistent framework for partitioning the Earth's surface into a hierarchy of cells, each with a unique identifier. The most well-known systems, Google's S2 and Uber's H3, have become industry standards for everything from database optimization to logistics. However, these systems come with inherent trade-offs. Now, a new DGGS called A5 has been developed to solve some of the critical limitations of its predecessors, particularly concerning area distortion and analytical accuracy.

    Why Gridding the Globe is Harder Than It Looks

    The core mathematical challenge of any DGGS is simple to state but difficult to solve: it is impossible to perfectly flatten a sphere onto a 2D grid without introducing some form of distortion. Think of trying to apply a perfect chessboard or honeycomb pattern to the surface of a ball; the shapes will inevitably have to stretch or warp to fit together without gaps. All DGGS work by starting with a simple 3D shape, a polyhedron, and projecting its flat faces onto the Earth's surface. The choice of this initial shape and the specific projection method used are what determine the system's final characteristics. As a simple analogy, consider which object you'd rather be hit on the head with: a smooth ball or a spiky cube? The ball is a better approximation of a sphere. When you "inflate" a spiky polyhedron to the size of the Earth, the regions nearest the sharp vertices get stretched out the most, creating the greatest distortion.

    A Quick Look at the Incumbents: S2 and H3

    To understand what makes A5 different, it's essential to have some context on the most popular existing systems.

    Google's S2: The Cube-Based Grid

    The S2 system is based on projecting a cube onto the sphere. On each face of this conceptual cube, a grid like a chessboard is applied. This approach is relatively simple but introduces significant distortion at the cube's vertices, or "spikes." As the grid is projected onto the sphere, the cells near these vertices become stretched into diamond shapes instead of remaining square. S2 is widely used under the hood for optimizing geospatial queries in database systems like Google BigQuery.

    Uber's H3: The Hexagonal Standard

    Uber's H3 system starts with an icosahedron—a 20-sided shape made of triangles. Because an icosahedron is a less "spiky" shape than a cube, H3 suffers from far less angular distortion. Its hexagonal cells look more consistent across the globe, making it popular for visualization. H3's immense success is also due to its excellent and user-friendly ecosystem of tools and libraries, making it easy for developers to adopt. However, H3 has one critical limitation for data analysis: it is not an equal-area system. This was a deliberate trade-off, not a flaw; H3 was built by a ride-sharing company trying to match drivers to riders, a use case where exact equal area doesn't particularly matter. To wrap a sphere in hexagons, you must also include exactly 12 pentagons—just like on a soccer ball. If you look closely at a football, you'll see the pentagonal panels are slightly smaller than the hexagonal ones.
    This same principle causes H3 cells to vary in size. The largest and smallest hexagons at a given resolution can differ in area by a factor of two, meaning that comparing raw counts in different cells is like comparing distances in miles and kilometers without conversion. For example, cells near Buenos Aires are smaller because of their proximity to one of the system's core pentagons, creating a potential source of error if not properly normalized.

    Introducing A5: A New System Built for Accuracy

    A5 is a new DGGS designed from the ground up to prioritize analytical accuracy. It is based on a dodecahedron, a 12-sided shape with pentagonal faces that is, in the words of its creator, "even less spiky" than H3's icosahedron. The motivation for A5 came from a moment of discovery. Its creator, Felix Palmer, stumbled upon a unique 2D tiling pattern made of irregular pentagons. This led to a key question: could this pattern be extended to cover the entire globe? The answer was yes, and it felt like uncovering something "very, very fundamental." This sense of intellectual curiosity, rather than a narrow business need, is the foundation upon which A5 is built. A5's single most important feature is that it is a true equal-area system. Using a specific mathematical projection, A5 ensures that every single cell at a given resolution level has the exact same area. This guarantee even accounts for the Earth's ...
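    The H3 area variation described above (exactly what A5's equal-area design removes) can be checked directly with the h3 Python bindings. A minimal sketch, assuming the h3-py v4 API, with cell locations chosen purely for illustration:

      import h3  # assumes h3-py version 4.x

      res = 5  # a mid-range H3 resolution

      # A cell near Buenos Aires (close to one of H3's twelve base pentagons)
      # versus a cell at a similar latitude near Sydney.
      buenos_aires = h3.latlng_to_cell(-34.6, -58.4, res)
      sydney = h3.latlng_to_cell(-33.9, 151.2, res)

      area_ba = h3.cell_area(buenos_aires, unit="km^2")
      area_syd = h3.cell_area(sydney, unit="km^2")

      # The two areas differ even though both cells are at the same resolution.
      print(area_ba, area_syd, area_syd / area_ba)

    An equal-area system like A5 would, by construction, report the same area for both locations.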
    37 m
  • The Sustainable Path for Open Source Businesses
    Jan 8 2026
    The Open-Source Conundrum

    Many successful open-source projects begin with passion, but the path from a community-driven tool to a sustainable business is often a trap. The most common route—relying on high-value consulting contracts—can paradoxically lead to operational chaos. Instead of a "feast or famine" cycle, many companies find themselves with more than enough work, but this success comes at a cost: a fragmented codebase, an exhausted team, and a growing disconnect from the core open-source community. This episode deconstructs a proven playbook for escaping this trap: the strategic transition from a service-based consultancy to a product-led company. Through the story of Jeroen Ticheler and his company, GeoCat, we will analyze how this pivot creates a more stable business, a healthier open-source community, and ultimately, a better product for everyone.
    36 m
  • Free Software and Expensive Threats
    Dec 26 2025
    Open-source software is often described as "free," a cornerstone of the modern digital world available for anyone to download, use, and modify. But this perception of "free" masks a growing and invisible cost—not one paid in dollars, but in the finite attention, time, and mounting pressure placed on the volunteer and community maintainers. This hidden tax is most acute when it comes to security. Jody from GeoCat, a long-time contributor to the popular GeoServer project, pulled back the curtain on the immense strain that security vulnerabilities place on the open-source ecosystem. His experiences reveal critical lessons for anyone who builds, uses, or relies on open-source software.
    34 m
  • Mapping Your Own World: Open Drones and Localized AI
    Dec 18 2025

    What if communities could map their own worlds using low-cost drones and open AI models instead of waiting for expensive satellite imagery?

    In this episode with Leen from HOT (Humanitarian OpenStreetMap Team), we explore how they're putting open mapping tools directly into communities' hands—from $500 drones that fly in parallel to create high-resolution imagery across massive areas, to predictive models that speed up feature extraction without replacing human judgment.

    Key topics:

    • Why local knowledge beats perfect accuracy
    • The drone tasking system: how multiple pilots map 80+ square kilometers simultaneously
    • AI-assisted mapping with humans in the loop at every step
    • Localizing AI models so they actually understand what buildings in Chad or Papua New Guinea look like
    • The platform approach: plugging in models for trees, roads, rooftop material, waste detection, whatever communities need
    • The tension between speed and OpenStreetMap's principles
    • Why mapping is ultimately a power game—and who decides what's on the map
    33 m
  • From Data Dump to Data Product
    Dec 9 2025

    This conversation with Jed Sundwall, Executive Director of Radiant Earth, starts with a simple but crucial distinction: the difference between data and data products. And that distinction matters more than you might think.

    We dig into why so many open data portals feel like someone just threw up a bunch of files and called it a day. Sure, the data's technically "open," but is it actually useful? Jed argues we need to be way more precise with our language and intentional about what we're building.

    A data product has documentation, clear licensing, consistent formatting, customer support, and most importantly - it'll actually be there tomorrow.

    From there, we explore Source Cooperative, which Jed describes as "object storage for people who should never log into a cloud console." It's designed to be invisible infrastructure - the kind you take for granted because it just works. We talk about cloud native concepts, why object storage matters, and what it really means to think like a product manager when publishing data.

    The conversation also touches on sustainability - both the financial kind (how do you keep data products alive for 50 years?) and the cultural kind (why do we need organizations designed for the 21st century, not the 20th?). Jed introduces this idea of "gazelles" - smaller, lighter-weight institutions that can move together and actually get things done.

    We wrap up talking about why shared understanding matters more than ever, and why making data easier to access and use might be one of the most important things we can do right now.

    46 m
  • Reflections from FOSS4G 2025
    Dec 2 2025

    Reflections from the FOSS4G 2025 conference

    Processing, Analysis, and Infrastructure (FOSS4G is Critical Infrastructure)

    The high volume of talks on extracting meaning from geospatial data—including Python workflows, data pipelines, and automation at scale—reinforced the idea that FOSS4G represents critical infrastructure.

    • AI Dominance: AI took up a lot of space at the conference. I was particularly interested in practical, near-term-impact talks, such as AI-assisted coding and how large language models (LLMs) can enhance geospatial workflows in QGIS. Typically, AI discussions focus on big data and earth observation, but these topics touch a larger audience. I sometimes wonder if adding "AI" to a title is now like adding a health warning: "Caution, a machine did this".
    • Python Still Rules (But Rust is Chatting): Python remains the pervasive, default geospatial language. However, there was chatter about Rust. One person suggested rewriting QGIS in Rust might make it easier to attract new developers.
    Data Infrastructure, Formats, and Visualization

    When geospatial people meet, data infrastructure—the "plumbing" of how data is stored, organized, and accessed—always dominates.

    • Cloud Native Won: Cloud native architecture captured all the attention. When thinking about formats, we are moving away from files on disk toward objects in storage and streaming subsets of data.
    • Key cloud-native formats covered included COGs (Cloud Optimized GeoTIFFs), Zarr, GeoParquet, and PMTiles. A key takeaway was the need to choose a format that best suits the use case, defined by who will read the file and what they will use the data for, rather than focusing solely on writing it.
    • The SpatioTemporal Asset Catalog (STAC) "stole the show" as data infrastructure, and DuckDB was frequently mentioned (a small sketch of DuckDB reading cloud-hosted GeoParquet follows this list).
    • Visualization is moving beyond interactive maps and toward "interactive experiences". There were also several presentations on Discrete Global Grid Systems (DGGS).
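    To make the "streaming subsets of data" point above concrete, here is a minimal sketch that points DuckDB at a GeoParquet file sitting in object storage; the URL and the country column are hypothetical:

      import duckdb  # assumes a recent duckdb release with the httpfs extension available

      con = duckdb.connect()
      con.execute("INSTALL httpfs;")
      con.execute("LOAD httpfs;")

      # Hypothetical GeoParquet file in object storage. DuckDB fetches only the
      # byte ranges needed for the selected columns and filtered row groups,
      # rather than downloading the whole file first.
      count = con.execute("""
          SELECT count(*)
          FROM read_parquet('https://example.com/buildings.parquet')
          WHERE country = 'NZ'
      """).fetchone()[0]
      print(count)

    Because Parquet stores column statistics per row group, a query like this can skip most of the file entirely.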
    Standards and Community Action
    • Standards Matter: Standards are often "really boring," but they are incredibly important for interoperability and reaping the benefits of network effects. The focus was largely on OGC APIs replacing legacy APIs like WMS and WFS (making it hard not to mention pygeoapi); a sketch of what an OGC API - Features request looks like follows this list.
    • Community Empowerment: Many stories focused on community-led projects solving real-world problems. This represents a shift away from expert-driven projects toward community action supported by experts. Many used OSM (OpenStreetMap) as critical data infrastructure, highlighting the need for locals to fill in large empty chunks of the map.
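    As referenced above, here is a rough sketch of an OGC API - Features request in practice; the endpoint and collection name are hypothetical, and the f=json parameter is the format-selection convenience that pygeoapi supports:

      import requests

      # Hypothetical OGC API - Features endpoint exposing a "buildings" collection.
      base = "https://example.com/ogcapi"
      resp = requests.get(
          f"{base}/collections/buildings/items",
          params={"bbox": "174.7,-41.3,174.8,-41.2", "limit": 10, "f": "json"},
          timeout=30,
      )
      resp.raise_for_status()

      # The response is plain GeoJSON, in contrast to the XML documents
      # returned by legacy WFS endpoints.
      features = resp.json()["features"]
      print(len(features))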
    High-Level Takeaways for the Future

    If I had to offer quick guidance based on the conference, it would be:

    1. Learn Python.
    2. AI coding is constantly improving and worth thinking about.
    3. Start thinking about maps as experiences.
    4. Embrace the Cloud and understand cloud-native formats.
    5. Standards matter.
    6. AI is production-ready and will be an increasingly useful interface to analysis.
    Reflections: What Was Missing?

    The conference was brilliant, but a few areas felt underrepresented:

    • Sustainable Funding Models: I missed a focus on how organizations can rethink their business models to maintain FOSS4G as critical infrastructure without maintainers feeling their time is an arbitrage opportunity.
    • Niche Products: I would have liked more stories about side hustles and niche SaaS products people were building, although I was glad to see the "Build the Thing" product workshop on the schedule.
    • Natural Language Interface: Given the impact natural language is having on how we interact with maps and geo-data, I was surprised there wasn't more dedicated discussion around it. I believe it will be a dominant way we interact with the digital world.
    • Art and Creativity: Beyond cartography and design talks, I was surprised how few talks focused on creative passion projects built purely for the joy of creation rather than as part of something bigger.
    14 m
  • Building a Community of Geospatial Storytellers
    Nov 27 2025

    Karl returns to the Mapscaping podcast to discuss his latest venture, Tyche Insights - a platform aimed at building a global community of geospatial storytellers working with open data.

    In this conversation, we explore the evolution from his previous company, Building Footprint USA (acquired by Lightbox), to this new mission of democratizing public data storytelling.

    Karl walks us through the challenges and opportunities of open data, the importance of unbiased storytelling, and how geospatial professionals can apply their skills to analyze and share insights about their own communities. Karl shares his vision for creating something akin to Wikipedia, but for civic data stories - complete with style guides, editorial processes, and community collaboration.

    Featured Links

    Tyche Insights:

    • Main website: https://tycheinsights.com
    • Wiki platform: https://wiki.tycheinsights.com
    • Example project: https://albanydatastories.com

    Mentioned in Episode:

    • USAFacts: https://usafacts.org
    • QField Partner Program: https://qfield.org/partner
    • Open Data Watch (monitoring global open data policies)
    42 m