You can probably use A.I. to tune the behavior of one physical system (A) to match that of another system (B) you are observing, using the environmental controls you have over (A). On this view, you would let the machine figure out how to set (A)'s environmental controls so that, given analogous inputs and a set of analogous measurements, (A) models the behavior of (B).
The rational path seems to be to give the machine an enormous set of measurements over both systems and let it discover the correlations on its own. This would require delivering simultaneous signals to both (A) and (B) and recording the paired responses. Eventually the machine can discover which signals that control (A) correspond to analogous signals that control (B). You can then use this correspondence to figure out how to achieve a desired end state of (B) given (A), or more generally, to model the behavior of (B) given (A), without in any way disrupting (B). This is a long-winded way of saying that we probably don't need to test anything on human beings anymore, until we're well into the realm of confidence in a particular solution, and in that case, you get their consent.
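The loop described here can be sketched as a toy: deliver the same signals to two systems, record the paired measurements, and let the machine fit a map from (A)'s readings to (B)'s. Everything below is an illustrative assumption, not a real experimental setup; both "systems" are just random linear response maps, and the discovered correlation is ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: each system's response to a control signal u
# is an unknown linear map plus a little sensor noise.
W_a = rng.normal(size=(3, 3))  # system A's (unknown) response map
W_b = rng.normal(size=(3, 3))  # system B's (unknown) response map

def measure_A(u):
    # deliver signal u to A, read its sensors
    return W_a @ u + 0.001 * rng.normal(size=3)

def measure_B(u):
    # deliver the analogous signal to B, read its sensors
    return W_b @ u + 0.001 * rng.normal(size=3)

# Step 1: deliver many simultaneous signals to both systems and record
# the paired measurements -- the "enormous set of measurements".
U = rng.normal(size=(500, 3))
Y_a = np.array([measure_A(u) for u in U])
Y_b = np.array([measure_B(u) for u in U])

# Step 2: let the machine discover the correlation, here as a
# least-squares linear map from A's readings to B's readings.
M, *_ = np.linalg.lstsq(Y_a, Y_b, rcond=None)

# Step 3: predict B's behavior from A alone -- B is never touched.
u_new = rng.normal(size=3)
predicted_b = measure_A(u_new) @ M
actual_b = measure_B(u_new)
print("prediction error:", np.linalg.norm(predicted_b - actual_b))
```

A real pairing would of course be nonlinear and noisy, which is exactly why the text hands the correlation discovery to a learning machine rather than a closed-form fit; the least-squares step is just the simplest possible stand-in for that learner.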
This could allow you to build a small model of a massive system that behaves in a scaled manner.