Boaz Barak
42:54
Does this imply that DGMs are closed under combinations? That is, given a DGM G, can we find a G' that captures k-sparse combinations of elements in the range?
Boaz Barak
52:51
Is efficiency the only potential advantage of the other models?
Anirudh
53:05
Were these images used in the training of the pre-trained deep generative models?
mn s
53:30
Don’t think so
Anirudh
53:56
Cool! Just checking :P
Manos Theodosis
58:16
I have a question; the initial intuition for sparsity is interpretability. Are you aware of a recent paper in UAI19 about Variational Sparse Coding? Have you thought about incorporating such ideas in the more general DGMs?
Boaz Barak
01:01:57
This is also true for A equalling the identity, right?
Boaz Barak
01:08:25
Yes
Boaz Barak
01:11:46
Does expansion imply that there is a unique solution?
Boaz Barak
01:21:07
Doesn't it reduce the task to finding the right norm?
Yamini Bansal
01:30:05
Does training an encoder end-to-end with the generative model (like VAEs) make the inversion process easier empirically?
Yamini Bansal
01:32:35
yes thanks
Yamini Bansal
01:34:27
Do we have good algorithms for getting the posterior on z, instead of a single z for inversion?
Yamini Bansal
01:36:04
thanks!
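For context on the inversion questions above (01:30:05 and 01:34:27), here is a minimal sketch of the standard point-estimate inversion that recovers a single latent z for a pretrained generator by gradient descent on the reconstruction error. The generator G, latent dimension, and target y are placeholders for illustration, not anything specific from the talk.

```python
# Illustrative sketch (not the speaker's code): point-estimate inversion of a
# pretrained generator G by gradient descent on the latent code z.
import torch

def invert_generator(G, y, latent_dim=128, steps=1000, lr=0.05, restarts=5):
    """Return the latent code whose generated output best matches y."""
    best_z, best_loss = None, float("inf")
    for _ in range(restarts):  # random restarts help with the non-convex landscape
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((G(z) - y) ** 2).mean()  # squared reconstruction error
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_loss, best_z = loss.item(), z.detach()
    return best_z
```

Approximating a posterior over z, as asked at 01:34:27, would replace this single optimized point with samples (for example via Langevin dynamics or a learned encoder), which the sketch above does not attempt.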
mn s
01:36:07
For the results discussed here, how well would you expect them to transfer to NLP?
mn s
01:38:09
Would love a link for the time series results!
Adam Block
01:38:15
Is there some analogue of invertibility for MCMC image generation?
Adam Block
01:38:44
And how would your method of determining quality of images be applied to MCMC images?
B Math
01:38:46
Can you please share your results on Time Series as you mentioned in your answer to the NLP question?
Alex Dimakis
01:39:25
deep image prior for time series
B Math
01:40:09
One-dimensional Deep Image Prior for Time Series Inverse Problems
mn s
01:40:12
URL: https://arxiv.org/abs/1904.08594
mn s
01:40:35
+1 super interesting!
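As a rough illustration of the deep-image-prior idea referenced above (fitting an untrained convolutional network to a single signal, with early stopping acting as the regularizer), here is a minimal 1D sketch. The architecture, step count, and the denoising task itself are placeholder choices for illustration, not the setup of the linked paper.

```python
# Illustrative sketch (not the linked paper's code): a 1D "deep image prior"
# that denoises a single time series by fitting an untrained conv net to it.
import torch
import torch.nn as nn

def dip_denoise_1d(y, steps=500, lr=0.01):
    """y: tensor of shape (1, 1, T) holding a single noisy time series."""
    net = nn.Sequential(                          # small untrained 1D conv net
        nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv1d(32, 1, kernel_size=3, padding=1),
    )
    z = torch.randn_like(y)                       # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):                        # few steps = implicit early stopping
        opt.zero_grad()
        loss = ((net(z) - y) ** 2).mean()         # fit the single noisy observation
        loss.backward()
        opt.step()
    return net(z).detach()                        # smoothed reconstruction
```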
Manos Theodosis
01:40:38
Thank you for the talk Alex!