Bridging the Gap in GEOINT Analysis Training

Sam Leung • April 9, 2021

Whitespace recently had the opportunity to discuss the “Evolution of 3D Terrain Models” with an outstanding panel organized by USGIF. If you haven’t already, check out some highlights here. Not only did we get to share our thoughts with the GEOINT Community, but we also learned about other exciting developments in the field.

Of note was a discussion on the USGIF Modeling, Simulation and Gaming Working Group’s most recent position paper, “Advancing the Interoperability of Geospatial Intelligence Tradecraft with 3D Modeling, Simulation and Game Engines.” This paper presents improvements that would make simulations more useful for GEOINT training and tradecraft development. The core theme of the paper, reiterated in the panel discussion, was that these improvements could be made by focusing primarily on the sensory aspects (i.e., how they look, sound, or even smell) of simulations and simulation infrastructure.

While this paper highlights the salient technological challenges of 3D/4D simulation itself, as well as potential ways to improve 3D/4D simulation engines and related data challenges, we want to call attention to a different but related gap in the current research: how can we improve what is happening within simulations?

Bridging this gap is critical for modern GEOINT analysis training.

Here’s the challenge. To date, most of the work supports military training and concentrates on the sensory aspect of training simulations. For GEOINT analysis training, however, the need is for immersive and realistic GEOINT data “collected” from the activities of simulated entities within the simulated environment. To train analysts in modern analytic skills, a simulated GEOINT data set should span weeks of data from various types of collections. The analyst should see simulated entities going about their daily lives, with background entities whose behavior is indistinguishable from that of the tiny fraction of entities and activities that are the target of analysis.
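To make the idea concrete, here is a minimal sketch (all names and parameters are hypothetical, not from any real system) of generating weeks of simulated collection events in which a handful of target entities follow exactly the same behavioral model as the background population, so nothing in the observable data distinguishes them:

```python
import random
from datetime import datetime, timedelta

def simulate_sightings(n_entities=1000, n_targets=5, days=21, seed=42):
    """Generate weeks of simulated 'collection' events. Target entities
    behave identically to the background population; the only difference
    is a hidden ground-truth label that the analyst never sees."""
    rng = random.Random(seed)
    start = datetime(2021, 4, 1)
    targets = set(rng.sample(range(n_entities), n_targets))
    events = []
    for day in range(days):
        for entity in range(n_entities):
            # Every entity follows the same daily-routine model, so
            # targets are statistically indistinguishable from background.
            for _ in range(rng.randint(1, 4)):  # sightings per day
                events.append({
                    "entity_id": entity,
                    "time": start + timedelta(days=day,
                                              minutes=rng.randint(0, 1439)),
                    "sensor": rng.choice(["FMV", "overhead", "CCTV"]),
                    "is_target": entity in targets,  # ground truth only
                })
    return events

events = simulate_sightings()
```

In a real training data set, the `is_target` flag would live only in the ground-truth layer used to grade the exercise; the analyst would have to surface the targets from the collection events alone.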

When diving deeper into a simulated environment, a student analyst should be able to find realistic behaviors and relationships of both the ‘good guys’ and the ‘bad guys’, as well as thousands of background entities. Students should never get the uneasy feeling that they are in a digital Hollywood movie set, where there isn’t much going on behind a lifelike façade.

Sensory GEOINT data still has a role to play in simulated GEOINT analytic training. After all, a training data set should reflect the full variety of data an analyst can encounter at work. Non-sensory simulated data sets with embedded intelligence problems (like the WorldLine data sets being used in some of NGA’s analytic training courses) can be made more effective as training tools by incorporating simulated video and image data that mimic real sensory GEOINT data. We can use the simulated 4D ground truth and descriptive metadata of simulated entities and settings to generate sensory data (e.g., FMV video, overhead imagery, social media photos, CCTV video, and other relevant data), creating an even more immersive and effective GEOINT training environment.

As researchers, developers, and analysts, we want to make the simulated activities that we train new analysts on as realistic as possible. To do so, we need to compose simulated data environments from both sensory and non-sensory data. When GEOINT analysts train in environments that better reflect the real world, they will be better prepared for it.
