Multi-Agent Motion Planning for Space Applications Using Deep Learning


Each of the two cases above has its own motion dynamics. In the former, the spacecraft motion is governed by double-integrator dynamics, in which thrust directly produces a proportional acceleration in the thrust direction. In the latter, the equation of motion includes the effect of gravity and, when expressed relative to a chief spacecraft in a circular orbit, can be linearized into the Clohessy-Wiltshire equations [24]. In both cases, the goal of swarm motion planning is to generate a minimum-fuel trajectory for each spacecraft in the swarm, transferring it from its current state to its assigned goal state. More specifically, the objective is to minimize the total L1 norm of fuel consumption over the entire team of spacecraft. Each spacecraft is considered homogeneous, with identical shape, and is fully actuated, able to apply thrust in the x, y, and z directions. The generated trajectories must respect the given actuation limits in each direction and must avoid collisions with any obstacles and with the other spacecraft in the environment.

We created datasets of increasing complexity, from 1 to 10 agents, covering both single-agent and multi-agent problems. Each dataset contains the optimal trajectories (ground truth) for a defined problem, as required to fully train a deep neural network. To generate the optimal motion trajectories, we used an optimization-based method that combines sequential convex programming with the recently developed parabolic relaxation [25]. The problem is first cast as a quadratically constrained quadratic program (QCQP) with the corresponding objective function and dynamical/physical constraints. The QCQP formulation is then processed through a series of transformations, including relaxation and penalization, to efficiently generate optimal trajectories.

A. Gravity-Free 2D Double Integrator (1 Agent and 1 Obstacle)

We first tackled the simple problem of a single agent moving in a 2D environment without gravity. The agent must move from its initial position to the target position while avoiding a single static obstacle. The transfer must be completed within T time steps while minimizing the control input (fuel consumption). The states of the agent and the obstacle are 4-degree-of-freedom (dof) parameters comprising position and velocity. The control input is a 2-dof parameter in the x and y directions, and the motion is governed by 2D double-integrator dynamics.

State of Agent: $X(t) = [x, y, \dot{x}, \dot{y}]^T$, State of Obstacle: $X_{obs} = [x_o, y_o, \dot{x}_o, \dot{y}_o]^T$ (1)

Control Input: $U(t) = [u_x, u_y]^T$ (2)

Dynamics (2D Double Integrator): $\ddot{x} = u_x$, $\ddot{y} = u_y$ (3)

The agent is assumed to be circular with a radius of 0.1. Although the actual shape of a spacecraft differs, using a circumscribing circular approximation of the agent's shape is standard practice in motion planning and simplifies collision detection. The radius of the obstacle is set to 0.1, and an additional minimum clearance of 0.1 is imposed between the spacecraft and the obstacle, acting as a safety buffer against disturbances and operational uncertainties.

The dataset has the following inputs. In every problem instance, the agent starts at rest (zero velocity) at position (0, 0) and must come to rest at position (1, 1). The obstacle position (a, b) is drawn uniformly at random from [0, 1] in each coordinate. The control input (thrust in the x and y directions) is limited to a maximum magnitude of 0.1, and the number of time steps to reach the target position is set to T = 10.
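To make the optimization setup concrete, the following is a minimal sketch of the single-agent minimum-fuel transfer as a discretized convex program, written here with the open-source solver interface cvxpy (our choice of tool; the paper's actual pipeline uses sequential convex programming with parabolic relaxation [25]). The non-convex obstacle-avoidance constraint is omitted from this sketch, since in the full method it is handled through the QCQP relaxation; the step size `dt` and the discretization matrices `A` and `B` are our own illustrative choices.

```python
import cvxpy as cp
import numpy as np

# Minimal sketch: single-agent, gravity-free, minimum-fuel transfer.
# The non-convex obstacle constraint is omitted; the paper handles it
# via sequential convex programming and parabolic relaxation.
T, dt, u_max = 10, 1.0, 0.1

# Discrete-time 2D double integrator: state [x, y, vx, vy], input [ux, uy].
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
B = np.array([[0.5 * dt**2, 0],
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]])

X = cp.Variable((T + 1, 4))  # trajectory X(0..T)
U = cp.Variable((T, 2))      # controls U(0..T-1)

constraints = [X[0] == np.array([0.0, 0.0, 0.0, 0.0]),  # start at rest at (0, 0)
               X[T] == np.array([1.0, 1.0, 0.0, 0.0])]  # stop at rest at (1, 1)
for t in range(T):
    constraints += [X[t + 1] == A @ X[t] + B @ U[t],    # dynamics, Eq. (3)
                    cp.abs(U[t]) <= u_max]              # actuation limit, Eq. (6)

# Total L1-norm fuel consumption, matching the paper's objective.
problem = cp.Problem(cp.Minimize(cp.sum(cp.abs(U))), constraints)
problem.solve()
print("fuel (L1 norm):", problem.value)
```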
$X(0) = [0, 0, 0, 0]^T$, $X(T) = [1, 1, 0, 0]^T$ (4)

$(a, b) \sim \mathcal{U}([0, 1]^2)$ (5)

$|u_x(t)| \le 0.1$, $|u_y(t)| \le 0.1$ (6)

The dataset output consists of trajectories and control inputs. The trajectory gives the state of the agent X(t) at each time step from t = 0 to t = T, and the control input specifies the action U(t) applied at each time step from t = 0 to t = T-1, with T = 10.

$\{X(t)\}_{t=0}^{T}$, $\{U(t)\}_{t=0}^{T-1}$ (7)

B. Gravity-Free 2D Double Integrator (10 Agents)

This problem is defined in the same way as the single-agent case above; only the differences are detailed here. First, the number of agents increases from 1 to 10, so the state is now a 40-dof parameter describing the positions and velocities of all 10 agents. The control input is likewise a 20-dof parameter specifying the actuation of all 10 agents. In this multi-agent setup, each agent acts as an obstacle that the other agents must avoid. Because of this, the single obstacle used for collision-avoidance training is no longer needed and is removed from the problem definition. The 10 agents are initially placed at random inside a unit square. The goal state is defined so that the 10 agents form a circle of radius 0.6 centered on the unit square. Each agent has a radius of 0.05, and an additional minimum clearance of 0.05 is imposed.

C. Relative Orbit Transfer near Earth (1 Agent)

In this problem, a deputy spacecraft orbits the Earth, and its motion is described relative to a chief spacecraft in a circular orbit. We assume that the relative distance between the deputy and the chief is sufficiently small, and we ignore J2 perturbation effects, so that the Clohessy-Wiltshire equations of motion hold. The deputy's state is described in the Local-Vertical, Local-Horizontal (LVLH) frame attached to the chief, where the x direction points radially outward from the Earth through the chief, the y direction points along the chief's velocity, and the z direction points along the chief's orbital angular momentum. The state is a 6-dof parameter comprising relative position and velocity. The control input is a 3-dof parameter describing the thrust in the x, y, and z directions, respectively.

State of Agent: $X(t) = [x, y, z, \dot{x}, \dot{y}, \dot{z}]^T$, Control Input: $U(t) = [u_x, u_y, u_z]^T$ (8)

Dynamics (CWH): $\ddot{x} = 3n^2 x + 2n\dot{y} + u_x$, $\ddot{y} = -2n\dot{x} + u_y$, $\ddot{z} = -n^2 z + u_z$ (9)

where n is the mean motion of the chief's circular orbit. We further assume that the deputy spacecraft is in a stable passive relative orbit (PRO) centered on the chief, a thrust-free orbit that maintains a bounded relative distance from the chief spacecraft. Concentric PROs ensure collision avoidance between deputies [26]. This imposes the energy-matching condition of Eq. (10) and the concentric-PRO condition of Eq. (11). As in the gravity-free case, we used a circumscribing sphere to represent the geometry of the deputy. The radius of the deputy is set to 5, and an additional minimum clearance of 5 is imposed for safety.

Energy matching: $\dot{y}_0 + 2n x_0 = 0$ (10)

Concentric PRO in x-y plane about origin: $y_0 - 2\dot{x}_0 / n = 0$, $z_0 = \dot{z}_0 = 0$ (11)

The dataset has the following inputs. The deputy spacecraft starts in a PRO with a given phase at time t = 0 and must transfer to another PRO with a specified phase by time t = T. For simplicity, the starting and target PROs are constrained to the x-y plane, and their semi-major axes are drawn uniformly at random between 25 and 75. The initial and target phases within each PRO are likewise drawn uniformly at random between 0 and 2π. The control input is limited to a maximum thrust of 10 in each of the x, y, and z directions. The number of time steps to reach the target state is set to T = 100.
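As an illustration of how problem instances can be sampled, below is a small sketch (our own construction, not code from the paper) that generates a planar PRO state from a semi-major axis and phase. It uses the standard closed-form planar PRO parameterization, in which the in-plane relative orbit is a 2:1 ellipse; one can check that the returned state satisfies both the energy-matching condition (10) and the concentric-PRO condition (11). The mean-motion value `n` is an illustrative low-Earth-orbit figure, not a parameter taken from the paper.

```python
import numpy as np

def sample_pro_state(rng, n=0.0011, a_min=25.0, a_max=75.0):
    """Sample a planar PRO state [x, y, z, vx, vy, vz] in the LVLH frame.

    The planar PRO is a 2:1 ellipse: semi-major axis a along y, a/2 along x.
    """
    a = rng.uniform(a_min, a_max)       # semi-major axis in [25, 75]
    phi = rng.uniform(0.0, 2 * np.pi)   # phase within the PRO
    A = a / 2.0                         # radial (x) amplitude

    x = A * np.sin(phi)
    y = 2 * A * np.cos(phi)
    vx = A * n * np.cos(phi)
    vy = -2 * A * n * np.sin(phi)

    # Energy matching (10): vy + 2*n*x == 0.
    # Concentric PRO (11):  y - 2*vx/n == 0, with z = vz = 0.
    assert np.isclose(vy + 2 * n * x, 0.0) and np.isclose(y - 2 * vx / n, 0.0)
    return np.array([x, y, 0.0, vx, vy, 0.0])

rng = np.random.default_rng(0)
print(sample_pro_state(rng))
```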
The dataset output consists of trajectories and control inputs. The trajectory gives the state of the agent X(t) at each time step from t = 0 to t = T, and the control input specifies the actuation U(t) applied at each time step from t = 0 to t = T-1, with T = 100.

$\{X(t)\}_{t=0}^{T}$, $\{U(t)\}_{t=0}^{T-1}$ (12)

D. Relative Orbit Transfer near Earth (10 Agents)

This problem is a direct extension of the single-agent case above. Instead of a single deputy spacecraft, 10 deputy spacecraft now perform PRO transfers simultaneously. The problem definition is the same as above, but the state and control inputs differ. The state is now a 60-dof parameter describing the positions and velocities of all 10 deputies, and the control input is a 30-dof parameter (3 dof per deputy). Fig 1 shows example problem instances for the single-deputy and 10-deputy cases.

Fig 1. Examples of passive relative orbit (PRO) transfers for a single deputy spacecraft (left) and 10 deputy spacecraft (right). Each deputy must transfer from its initial PRO to the target PRO at the assigned phase. The minimum-fuel trajectory is computed over many instances of the problem to build the dataset required for deep neural network training.

IV. Deep Neural Network Architecture

We used a multilayer neural network composed of multiple densely connected layers, each with a nonlinear activation function, plus dropout layers. For the nonlinear activation we used rectified linear units, g(x) = max(0, x), where x is the input to a neuron. The rectifier is the most widely used activation function for deep neural networks; it is biologically plausible and provides better gradient propagation, with fewer vanishing-gradient problems. We used a dropout regularization rate of 0.5 to prevent overfitting.

Neural network training aims to minimize the error function E, the mean squared error (MSE) that quantifies the difference between the trajectory and velocity computed by the neural network and the true trajectory and velocity of the mathematical model, $\{X, Y, X', Y'\}$, for given initial and final positions and velocities. Given N time steps, $\{X, Y, X', Y'\}$ is defined as $\{(x_1, y_1, x'_1, y'_1), \ldots, (x_N, y_N, x'_N, y'_N)\}$. The error function is

$E(\hat{X}, \hat{Y}, \hat{X}', \hat{Y}') = \frac{1}{2N} \sum_{i=1}^{N} \left[ (\hat{x}_i - x_i)^2 + (\hat{y}_i - y_i)^2 + (\hat{x}'_i - x'_i)^2 + (\hat{y}'_i - y'_i)^2 \right]$
$= \frac{1}{2N} \sum_{i=1}^{N} \sum_{j=1}^{2} \left[ (g(w_{x_j} \cdot x_j + b_{x_j}) - x_i)^2 + (g(w_{y_j} \cdot y_j + b_{y_j}) - y_i)^2 + (g(w_{x'_j} \cdot x'_j + b_{x'_j}) - x'_i)^2 + (g(w_{y'_j} \cdot y'_j + b_{y'_j}) - y'_i)^2 \right]$ (13)

where $\{\hat{x}_i, \hat{y}_i, \hat{x}'_i, \hat{y}'_i\}$ denotes the output of the neural network on input $\{x_j, y_j, x'_j, y'_j\}$, $j = \{1, 2\}$, the start and goal positions and velocities. The objective is to adjust w and b so that E is as close to zero as possible. The weights and biases are updated according to

$w_{i+1} = w_i - \alpha \, \partial E(\hat{X}, \hat{Y}, \hat{X}', \hat{Y}') / \partial w_i$ (14)

$b_{i+1} = b_i - \alpha \, \partial E(\hat{X}, \hat{Y}, \hat{X}', \hat{Y}') / \partial b_i$ (15)

where $w_i$ and $b_i$ are the values of w and b after the i-th iteration of gradient descent, $\partial f / \partial x$ denotes the partial derivative of f with respect to x, and α is the learning rate, set to α = 0.001. By the chain rule, the weight delta and bias delta are

$\Delta w = w_{i+1} - w_i = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{2} \alpha \, (x_i - \hat{x}_i) \, g'(h_i) \, x_j$ (16)

$\Delta b = b_{i+1} - b_i = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{2} \alpha \, (x_i - \hat{x}_i) \, g'(h_i)$ (17)

where $\hat{x}_i = g(w \cdot x_j + b)$ and $h_i = w \cdot x_j + b$. After training is complete, the weights and biases are optimized to minimize the error function E.
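For readers who want a concrete starting point, the following is a minimal sketch of the architecture described above, written in Keras (our choice of framework; the paper does not name one). It uses densely connected hidden layers with ReLU activations and dropout of 0.5, trained with an MSE loss and plain gradient descent at the learning rate α = 0.001 from Eqs. (14)-(15); the default input/output sizes shown correspond to the 2D 10-agent case discussed in the next section and are our reading of those dimensions.

```python
import tensorflow as tf

# Minimal sketch of the planner network: 4 dense hidden layers of 100 units,
# ReLU activations, dropout 0.5, MSE loss, learning rate 0.001.
# Input/output sizes here match the 2D 10-agent case (120 inputs, 40 outputs).
def build_planner(n_inputs=120, n_outputs=40, width=100, depth=4):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(n_inputs,)))
    for _ in range(depth):
        model.add(tf.keras.layers.Dense(width, activation="relu"))
        model.add(tf.keras.layers.Dropout(0.5))  # regularization, rate 0.5
    model.add(tf.keras.layers.Dense(n_outputs))  # trajectory + velocity outputs
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
                  loss="mse")
    return model

model = build_planner()
model.summary()
```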
The number of layers and the number of parameters per layer were determined empirically by evaluating networks with 3 to 12 layers and 10 to 200 parameters per layer. We found that 4 layers with 100 parameters per layer gave the minimum mean squared error (MSE) (Fig 2).

Fig 2. Parameter optimization by examining different numbers of parameters and layers of neural networks. MSE is minimized at 100 parameters and 4 layers.

A. 2D Double Integrator

The simplest toy model, a 2D double integrator (a second-order control system), was used to model the dynamics of a simple agent in two-dimensional space under a time-varying force input. For each agent, 12 parameters were used as input: the initial position $(x_i, y_i)$, velocity $(x'_i, y'_i)$, and acceleration $(x''_i, y''_i)$, and the final position $(x_f, y_f)$, velocity $(x'_f, y'_f)$, and acceleration $(x''_f, y''_f)$. For each obstacle, 3 parameters were used as input: the position $(x_o, y_o)$ and the radius r. Considering 10 agents, the total number of input parameters is 120. The output size of the network depends on the temporal resolution. For T = 10 time steps, the output comprises 60 parameters (T = 10 time steps × 6 values (position, velocity, acceleration) = 60 parameters). Of these, only the trajectory and velocity were retained; the acceleration can be excluded (leaving 40 output parameters), because it can be recovered afterwards from the differential relationship between acceleration, velocity, and position. Reducing the number of output parameters affects the optimal number of hidden layers and required parameters, so we used this reduction to save memory and improve computational efficiency.

B. 3D Passive Relative Orbit (PRO) Transfer

As a more realistic model, we investigated PRO transfer. For each agent, 18 parameters were used as input: the initial position $(x_i, y_i, z_i)$, velocity $(x'_i, y'_i, z'_i)$, and acceleration $(x''_i, y''_i, z''_i)$, and the final position $(x_f, y_f, z_f)$, velocity $(x'_f, y'_f, z'_f)$, and acceleration $(x''_f, y''_f, z''_f)$. Considering 10 agents, the total number of input parameters is 180. Similarly, the output size of the network was 90 parameters (T = 10 time steps × 9 position, velocity, and acceleration values). Compared to the 2D double integrator, 3D PRO transfer differs in that path planning involves not only additional dimensions but also gravity constraints.

V. Deep Neural Network Training and Testing Results

A. 2D Double Integrator

A single-agent, single-obstacle case was tested as the simplest model. The results showed accurate position and velocity estimation (RMSE = 0.0129 ± 0.0088) using the deep learning-based numerical model (Fig 3). The computation time per estimated condition improved from a few seconds to 5 × 10⁻⁴ seconds (an approximately 1,000-fold improvement). Neural network training took 3 to 5 hours (NVIDIA GTX 1080).

Fig 3. An example of accurate position, velocity, and acceleration estimation by the neural network numerical model compared to the ground truth mathematical model (RMSE = 0.0108). Blue circles represent the single agent's trajectory. Arrows indicate velocity at a given position. A single obstacle, marked in red, is fixed (radius = 0.1). The outer red circle around the obstacle represents the safety distance for avoiding collision between the obstacle and the agent.
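To make the input/output encoding concrete, here is a small sketch (our own illustration, with hypothetical array and function names) of how one 2D training example could be flattened into the 120-dimensional input vector and the 40-dimensional per-agent output vector described above.

```python
import numpy as np

T, N_AGENTS = 10, 10

def encode_input(bc, obstacle=None):
    """Flatten boundary conditions into the network input vector.

    bc: array of shape (N_AGENTS, 12) holding, per agent, initial
    position/velocity/acceleration and final position/velocity/acceleration.
    obstacle: optional (x_o, y_o, r) triple for the single-obstacle case.
    """
    x = bc.reshape(-1)                      # 10 agents x 12 params = 120
    if obstacle is not None:
        x = np.concatenate([x, obstacle])   # +3 obstacle params
    return x

def encode_target(states):
    """Flatten one agent's trajectory into the 40-dim training target.

    states: array of shape (T, 4) with (x, y, vx, vy) per time step;
    acceleration is dropped since it can be recovered by differencing.
    """
    return states.reshape(-1)               # T=10 steps x 4 values = 40

rng = np.random.default_rng(0)
x = encode_input(rng.normal(size=(N_AGENTS, 12)))
y = encode_target(rng.normal(size=(T, 4)))
print(x.shape, y.shape)  # (120,) (40,)
```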
The safety distance is defined as safety radius = obstacle radius + 0.1.

The next step was to investigate the 2D 10-agent model with various training set sizes. Deep neural networks were trained using 1,000, 10,000, and 100,000 examples. We randomly selected 70% of the trajectory data for training, 15% for validation during training, and 15% for final testing. Training was stopped when the validation loss no longer decreased, to prevent overfitting. As more training data were added, the trajectories generated by the neural networks grew closer to the ground truth (Fig 4). As a measure of model performance, we summed the L1 norm of each agent's acceleration over all time steps, which represents the agent's fuel consumption. As the amount of training data increased, the fuel consumption of the neural network-based trajectory estimates decreased. When trained with 100,000 examples, fuel consumption fell close to the ground truth (mean ground truth fuel consumption = 1.91 ± 0.46, mean neural network fuel consumption = 2.23 ± 0.21). The difference between the two groups was not significant (P = 0.24, two-sample t-test) (Fig 5).

Fig 4. An example of the 2D 10-agent model with various training set sizes. Trajectory modeling improved as the amount of training data increased. Blue circles represent agent trajectories. Red circles represent the safety distance for avoiding collisions between agents. In the bottom left plot, "S" marks a start position and "F" marks a final position.

Fig 5. Fuel consumption of the ground truth and neural network models for different training set sizes. The fuel consumption of the neural network models decreased as the amount of training data increased.

B. 3D PRO Transfer

We also tested neural network-based PRO transfer estimation in 3D space with gravity constraints. We found that the path was accurately estimated by the neural network (Fig 6). It took 0.99 seconds to generate 2,000 path estimates with the neural network, versus 1.7 hours to generate the same 2,000 optimal path plans with the convex optimization model, a roughly 6,000-fold improvement in computational efficiency.

Fig 6. Representative examples of accurate path estimation by the single-agent model under 3D PRO transfer conditions. The paths generated by the neural network closely match the ground truth. Computational efficiency improved 6,000-fold using the neural network.

We then expanded the problem to 10 agents in 3D space, which is much more complex. We trained the neural network on 30,000 ground truth examples, but it did not succeed in learning the collision avoidance and path planning of 10 interacting agents (Fig 7). As a result, the neural network's fuel consumption was much higher than the optimal ground truth fuel consumption. We attribute the failure largely to the high complexity of the problem (10 agents in 3D space) and the insufficient size of the training set. We used 30,000 training examples for this result and suggest generating up to 100,000 examples to cover the full breadth of 3D 10-agent dynamics. Another option would be to reduce the complexity of the problem by fixing parameters such as each agent's start and goal locations across all datasets.

Fig 7. Examples of inaccurate path estimation by the neural network in the 10-agent 3D model.
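As a reference for the performance metric above, this is a one-function sketch (our own, assuming the control input is stored per time step) of the L1-norm fuel measure used to compare the neural network and ground truth trajectories.

```python
import numpy as np

def fuel_l1(U):
    """Total L1-norm fuel consumption for one agent.

    U: array of shape (T, d) with the control (thrust) applied at each of
    the T time steps in d directions. The measure is the sum of the L1
    norms of the per-step control vectors, matching the paper's
    minimum-fuel objective.
    """
    return float(np.sum(np.abs(U)))

# Example: compare a neural network trajectory against the ground truth.
rng = np.random.default_rng(1)
U_true = rng.uniform(-0.1, 0.1, (10, 2))
U_nn = rng.uniform(-0.1, 0.1, (10, 2))
print(fuel_l1(U_true), fuel_l1(U_nn))
```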
The trajectories of the 10 agents did not follow the ground truth, and the fuel consumption of the neural network model was much higher than that of the ground truth.

VI. Application to Space

The results of this study demonstrate the potential of a deep neural network planner, once trained, to generate minimum-fuel optimal trajectories at a far lower computational cost than traditional trajectory optimization. Although the technology is in its infancy and must mature to become flight-ready, it opens up the possibility of real-time, on-board computation of trajectories for missions involving large fleets of spacecraft. Such a capability would be a key component of the autonomy that enables future swarm spacecraft missions, which hold many opportunities for the space industry in general [18].

The first category of swarm spacecraft missions that could greatly benefit from this capability is on-orbit servicing. With the advent of SmallSats and the ever-increasing number being launched by both the public and private space industry (e.g., Starlink from SpaceX), there is a growing need to service space assets in orbit around the Earth [27]. On-orbit servicing may involve inspecting, docking with, and manipulating a space asset, all of which could be greatly enhanced by using multiple spacecraft to perform the task [28]. On-orbit servicing with a fleet of spacecraft requires real-time trajectory planning to avoid collisions and to perform the reconfiguration needed for a given task. With a deep neural network planner on board, active reconfiguration of a large fleet of spacecraft could be performed even with constrained on-board computational power.

The second category is Earth science missions that benefit from multiple measurements of Earth phenomena. For example, measurements from hundreds of SmallSats in low Earth orbit could provide the information needed to generate a high-resolution spatiotemporal 3D cloud map. Multi-angle observations already allow estimation of cloud-top height and horizontal position through stereo-imaging and tracking of image features [29], and a tomographic approach yields volumetric information about clouds [30, 31]. In particular, the 3D distribution of liquid water, combined with the droplet size distribution, is essential for improving the representation of clouds in climate models and in cloud process modeling. Furthermore, the benefit of multi-angle observations from low Earth orbit is not limited to clouds; such observations also provide valuable information about aerosol plumes: 3D information on the mass and particle properties of smoke and dust plumes helps advance the monitoring and nowcasting of air quality and of the spread of wildfire smoke, volcanic plumes, and sandstorms.

VII. Conclusion

In this study, we showed that deep neural networks can accurately estimate multi-agent path plans using training data generated by existing mathematical models. Models were tested with single agents and with 10 agents in 2D and 3D space. The accuracy of the generated trajectories and their fuel consumption were comparable to those of the ground truth. We also showed a dramatic improvement in computational efficiency (1,000x to 6,000x, depending on problem complexity), which could enable real-time, distributed, on-board multi-agent path planning for large fleets of spacecraft (>100).
The results of this study demonstrate the potential of deep neural network models to bridge the technology gap in the scalability of multi-agent motion planning. This capability is essential for operating large-scale spacecraft swarms that cannot rely solely on ground-based guidance, and it enables future science missions that stand to benefit greatly from multi-angle space measurements. For example, measurements from hundreds of SmallSats in low Earth orbit could provide the information needed to generate a high-resolution spatiotemporal 3D cloud map, addressing major uncertainties in climate science and challenges in weather forecasting. To mature deep neural network-based planning technology, the path planning accuracy of the neural network must be improved. To that end, we propose a hybrid approach in which a convex optimization model fine-tunes the trajectories generated by the neural network, improving accuracy and robustness. Reinforcement learning could also be considered, as an unsupervised, self-learning, adaptive approach that is independent of the convex optimization model.

Acknowledgments

This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).

References

[1] LaValle, S. M. Planning Algorithms. Cambridge University Press, 2006.
[2] Canny, J. Some Algebraic and Geometric Computations in PSPACE. 1988.
[3] Kavraki, L. E., Svestka, P., Latombe, J.-C., and Overmars, M. H. "Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces." IEEE Transactions on Robotics and Automation, Vol. 12, No. 4, 1996, pp. 566–580.
[4] LaValle, S. M., and Kuffner Jr, J. J. "Randomized Kinodynamic Planning." The International Journal of Robotics Research, Vol. 20, No. 5, 2001, pp. 378–400.
[5] Karaman, S., and Frazzoli, E. "Sampling-Based Algorithms for Optimal Motion Planning." The International Journal of Robotics Research, Vol. 30, No. 7, 2011, pp. 846–894.
[6] Koenig, S., Likhachev, M., and Furcy, D. "Lifelong Planning A∗." Artificial Intelligence, Vol. 155, Nos. 1–2, 2004, pp. 93–146.
[7] Augugliaro, F., Schoellig, A. P., and D'Andrea, R. Generation of Collision-Free Trajectories for a Quadrocopter Fleet: A Sequential Convex Programming Approach. 2012.
[8] Bonalli, R., Cauligi, A., Bylard, A., and Pavone, M. GuSTO: Guaranteed Sequential Trajectory Optimization via Sequential Convex Programming. 2019.
[9] Demaine, E. D., Hendrickson, D. H., and Lynch, J. "Toward a General Theory of Motion Planning Complexity: Characterizing Which Gadgets Make Games Hard." arXiv, 2018, arXiv:1812.
[10] Solovey, K., and Halperin, D. "On the Hardness of Unlabeled Multi-Robot Motion Planning." The International Journal of Robotics Research, Vol. 35, No. 14, 2016, pp. 1750–1759.
[11] Johnson, J. K. "On the Relationship between Dynamics and Complexity in Multi-Agent Collision Avoidance." Autonomous Robots, Vol. 42, No. 7, 2018, pp. 1389–1404.
[12] Dobson, A., Solovey, K., Shome, R., Halperin, D., and Bekris, K. E. Scalable Asymptotically-Optimal Multi-Robot Motion Planning. 2017.
[13] Čáp, M., Novák, P., Vokřínek, J., and Pěchouček, M. "Multi-Agent RRT*: Sampling-Based Cooperative Pathfinding." arXiv preprint arXiv:1302.2828, 2013.
[14] Wagner, G., and Choset, H. "Subdimensional Expansion for Multirobot Path Planning." Artificial Intelligence, Vol. 219, 2015, pp. 1–24.
[15] Yu, J., and LaValle, S. M.
"Optimal Multirobot Path Planning on Graphs: Complete Algorithms and Effective Heuristics." IEEE Transactions on Robotics, Vol. 32, No. 5, 2016, pp. 1163–1177.
[16] Hönig, W., Preiss, J. A., Kumar, T. S., Sukhatme, G. S., and Ayanian, N. "Trajectory Planning for Quadrotor Swarms." IEEE Transactions on Robotics, Vol. 34, No. 4, 2018, pp. 856–869.
[17] Morgan, D., Chung, S.-J., and Hadaegh, F. Y. "Model Predictive Control of Swarms of Spacecraft Using Sequential Convex Programming." Journal of Guidance, Control, and Dynamics, Vol. 37, No. 6, 2014, pp. 1725–1740.
[18] Rahmani, A. "Swarm of Space Vehicles and Future Opportunities." JPL Blue Skies Study, 2018.
[19] Shi, G., Hönig, W., Yue, Y., and Chung, S.-J. "Neural-Swarm: Decentralized Close-Proximity Multirotor Control Using Learned Interactions." arXiv preprint arXiv:2003.02992, 2020.
[20] Tolstaya, E., Gama, F., Paulos, J., Pappas, G., Kumar, V., and Ribeiro, A. Learning Decentralized Controllers for Robot Swarms with Graph Neural Networks. 2020.
[21] Zhou, G., Moayedi, H., Bahiraei, M., and Lyu, Z. "Employing Artificial Bee Colony and Particle Swarm Techniques for Optimizing a Neural Network in Prediction of Heating and Cooling Loads of Residential Buildings." Journal of Cleaner Production, Vol. 254, 2020, p. 120082.
[22] Bui, Q.-T., Nguyen, Q.-H., Nguyen, X. L., Pham, V. D., Nguyen, H. D., and Pham, V.-M. "Verification of Novel Integrations of Swarm Intelligence Algorithms into Deep Learning Neural Network for Flood Susceptibility Mapping." Journal of Hydrology, Vol. 581, 2020, p. 124379. https://doi.org/10.1016/j.jhydrol.2019.124379.
[23] Fang, Y., Wang, Z., Gomez, J., Datta, S., Khan, A. I., and Raychowdhury, A. "A Swarm Optimization Solver Based on Ferroelectric Spiking Neural Networks." Frontiers in Neuroscience, Vol. 13, 2019, p. 855.
[24] Clohessy, W. H., and Wiltshire, R. S. "Terminal Guidance System for Satellite Rendezvous." Journal of the Aerospace Sciences, Vol. 27, No. 9, 1960, pp. 653–658.
[25] Madani, R., Kalbat, A., and Lavaei, J. "A Low-Complexity Parallelizable Numerical Algorithm for Sparse Semidefinite Programming." IEEE Transactions on Control of Network Systems, Vol. 5, No. 4, 2017, pp. 1898–1909.
[26] Morgan, D., Chung, S.-J., Blackmore, L., Acikmese, B., Bayard, D., and Hadaegh, F. Y. "Swarm-Keeping Strategies for Spacecraft under J2 and Atmospheric Drag Perturbations." Journal of Guidance, Control, and Dynamics, Vol. 35, No. 5, 2012, pp. 1492–1506.
[27] Broderick, D. Intellectual Property and Licensing in the Commercial Space Age. 2020.
[28] Bernhard, B., Choi, C., Rahmani, A., Chung, S.-J., and Hadaegh, F. Coordinated Motion Planning for On-Orbit Satellite Inspection Using a Swarm of Small-Spacecraft. 2020.
[29] Moroney, C., Davies, R., and Muller, J.-P. "Operational Retrieval of Cloud-Top Heights Using MISR Data." IEEE Transactions on Geoscience and Remote Sensing, Vol. 40, No. 7, 2002, pp. 1532–1540.
[30] Levis, A., Schechner, Y. Y., Aides, A., and Davis, A. B. Airborne Three-Dimensional Cloud Tomography. 2015.
[31] Levis, A., Schechner, Y. Y., and Davis, A. B. Multiple-Scattering Microphysics Tomography. 2017.