The description of the environment in which a biomedical simulation operates (model context) and the parameterization of internal model rules (model content) require the optimization of a large number of free parameters; given the wide range of variable combinations, along with the intractability of ab initio modeling techniques that could otherwise constrain these combinations, an astronomical number of simulations would be required to achieve this goal. In this work, we utilize a nested active-learning workflow to efficiently parameterize and contextualize an agent-based model of sepsis.

Methods
Billions of microbial sepsis patients were simulated using a previously validated agent-based model (ABM) of sepsis, the Innate Immune Response Agent-Based Model (IIRABM). Contextual parameter space was examined using the following parameters: cardio-respiratory-metabolic resilience; two properties of microbial virulence, invasiveness and toxigenesis; and degree of contamination from the environment. The model's internal parameterization, which represents gene expression and associated cellular behaviors, was explored through the augmentation or inhibition of signaling pathways for 12 signaling mediators associated with inflammation and wound healing. We implemented a nested active-learning (AL) approach in which the clinically relevant (CR) model environment space for a given internal model parameterization is mapped using a small artificial neural network (ANN). The outer AL level of the workflow uses a larger ANN with a novel approach to active learning, Double Monte-Carlo Dropout Uncertainty (DMCDU), to efficiently regress the volume and centroid location of the CR space produced by a single internal parameterization.

Results
A brute-force exploration of the IIRABM's content and context would require approximately 3 × 10^12 simulations, and the result would be only a coarse representation of a continuous space.
We have reduced the number of simulations required to efficiently map the clinically relevant parameter space of this model by approximately 99%. Additionally, we have shown that more complex models with larger numbers of variables can be expected to see even greater gains in efficiency.
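To make the uncertainty-driven sampling concrete, the sketch below illustrates the standard Monte-Carlo dropout idea that DMCDU builds on: dropout is left active at inference time, repeated stochastic forward passes yield a predictive distribution, and the candidate parameterizations with the highest predictive variance are selected for the next round of simulation. This is a minimal illustrative sketch, not the paper's implementation; the toy network, its sizes, the dropout rate, and the candidate set are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer regressor standing in for the outer-level ANN.
# 4 input parameters -> 32 hidden units -> 1 scalar output (e.g. CR-space
# volume). Weights are random here; in practice they would be trained.
W1 = rng.normal(size=(4, 32))
W2 = rng.normal(size=(32, 1))

def mc_forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON (MC dropout)."""
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop    # fresh dropout mask each call
    h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
    return (h @ W2).ravel()

def mc_dropout_uncertainty(X, n_samples=100):
    """Predictive mean and std. dev. over n_samples stochastic passes."""
    preds = np.stack([mc_forward(X) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# Active-learning step: rank unlabeled candidate parameterizations by
# predictive uncertainty and pick the 10 most uncertain to simulate next.
candidates = rng.uniform(-1.0, 1.0, size=(200, 4))
mean, std = mc_dropout_uncertainty(candidates)
next_batch = candidates[np.argsort(std)[-10:]]
```

The efficiency gain reported above comes from this loop: simulations are spent only where the surrogate network is least certain, rather than on a uniform brute-force grid.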