Does this imply that DGMs are closed under combinations? That is, given a DGM G, can we find a G' that captures k-sparse combinations of elements in the range?
Is efficiency the only potential advantage of the other models?
Were these images used in the training of the pre-trained deep generative models?
Don’t think so
Cool! Just checking :P
I have a question: the initial intuition for sparsity is interpretability. Are you aware of a recent UAI 2019 paper on Variational Sparse Coding? Have you thought about incorporating such ideas into more general DGMs?
This is also true for A equalling the identity, right?
Does expansion imply that there is a unique solution?
Doesn't it reduce the task to finding the right norm?
Does training an encoder end-to-end with the generative model (like VAEs) make the inversion process easier empirically?
Do we have good algorithms for getting the posterior on z, instead of a single z for inversion?
For the results discussed here, how well would you expect them to transfer to NLP?
Would love a link for the time series work!
Is there some analogue of invertibility for MCMC image generation?
And how would your method of determining image quality apply to MCMC-generated images?
Can you please share your results on time series, as you mentioned in your answer to the NLP question?
deep image prior for time series
One-dimensional Deep Image Prior for Time Series Inverse Problems
+1 super interesting!
Thank you for the talk Alex!