Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. Although remarkably challenging to solve in theory, disentanglement is often achieved in practice through prior matching. Furthermore, recent works have shown that prior matching approaches can be enhanced by leveraging geometrical considerations, e.g., by learning representations that preserve geometric features of the data, such as distances or angles between points. However, matching the prior while preserving geometric features is challenging, as a mapping that fully preserves these features while aligning the data distribution with the prior does not exist in general. To address these challenges, we introduce a novel approach to disentangled representation learning based on quadratic optimal transport. We formulate the problem using Gromov-Monge maps that transport one distribution onto another with minimal distortion of predefined geometric features, preserving them as much as possible. To compute such maps, we propose the Gromov-Monge Gap (GMG), a regularizer that quantifies whether a map moves a reference distribution with minimal geometry distortion. We demonstrate the effectiveness of our approach for disentanglement across four standard benchmarks, outperforming other methods that leverage geometric considerations.
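For concreteness, the following is a minimal sketch of the quadratic (Gromov-Monge) transport problem and of a gap-style regularizer consistent with the description above; the pairwise geometric costs $c_{\mathcal{X}}$, $c_{\mathcal{Y}}$ and the reference measure $\rho$ are generic placeholders and not necessarily the exact choices made in the paper. Given a map $T:\mathcal{X}\to\mathcal{Y}$, its geometric distortion with respect to $\rho$ can be written as
\[
  \Delta_{\rho}(T) \;=\; \iint_{\mathcal{X}\times\mathcal{X}}
  \big(\,c_{\mathcal{X}}(x,x') - c_{\mathcal{Y}}\!\big(T(x),T(x')\big)\,\big)^{2}
  \,\mathrm{d}\rho(x)\,\mathrm{d}\rho(x'),
\]
and the Gromov-Monge problem seeks, among all maps pushing $\mu$ onto $\nu$, one of minimal distortion,
\[
  \mathrm{GM}(\mu,\nu) \;=\; \inf_{T \,:\, T_{\sharp}\mu = \nu} \Delta_{\mu}(T).
\]
A gap-style regularizer in the spirit of the GMG then compares the distortion of $T$ to the best achievable among maps with the same pushforward,
\[
  \mathrm{GMG}_{\rho}(T) \;=\; \Delta_{\rho}(T) \;-\; \mathrm{GM}\big(\rho,\, T_{\sharp}\rho\big)\;\ge\; 0,
\]
which is nonnegative (since $T$ is itself admissible for the inner problem) and vanishes exactly when $T$ moves $\rho$ with minimal geometry distortion.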
*Equal contribution
**Equal advising
†CREST-ENSAE
‡Helmholtz Munich
§TU Munich
¶MCML
††Tübingen AI Center