Harvard ML theory talk - Percy Liang
Sidak Pal Singh
27:35
Does it maintain the nominal accuracy when you try to minimize the upper bound?
Yamini Bansal
28:26
Standard accuracy is about 35%
Yamini Bansal
28:29
for linear
John
43:01
Given that the drop in accuracy between the robust and non-robust models is quite large (if I'm not mistaken, in all the examples), may I ask the possibly naive question: do we have the right definition of robustness to start with? E.g., can we rule out that the high accuracy of non-robust models is an artifact of training?
Sidak Pal Singh
44:04
Can I ask another question?
Boaz Barak
44:31
Maybe we can wait till the next pause, if that's OK? Otherwise feel free to ask
Boaz Barak
44:59
But everyone - if you have a question about the current slide - please ask
Sidak Pal Singh
45:09
It’s on the previous slide
Sidak Pal Singh
45:14
So I can ask later then
John
56:14
What is the key idea in the slide with the cubic splines? It seems like the complexity of the model class might be one of the reasons for non-robustness?
Sidak Pal Singh
58:00
On this cubic splines slide, I have another question. Even for iid generalisation, all bounds hold with high probability over the choice of training set. So if, in your case, you took the expectation over the choice of training set, I guess that might reveal more about the robust accuracy.
Rabiul Awal
59:27
Regarding the findings from the slide where training a robust model requires more data than a standard model: is there any recipe from few-shot learning that can help in this regard with less data? In general, can any idea from few-shot learning help achieve robustness?
Sidak Pal Singh
01:02:09
Can I maybe discuss/ask another thing about your answer to the comment I made?
John
01:02:21
Isn't regularisation the same as controlling model complexity, which we also do via data augmentation? So, if the cubic splines in the 2D figure had regularisation, would that stabilise extrapolation to new data points along the blue line, similar to standard training?
Boaz Barak
01:24:44
Blog link https://ai.stanford.edu/blog/removing-spuriousfeature/
Sidak Pal Singh
01:27:41
On the spurious features slide: just to clarify, is your linear regression model regularised (e.g., with an l2-norm penalty) or not?
Boaz Barak
01:28:05
Let's hold further questions until the end - thanks!
Lucas B Janson
01:35:20
I’ve got to go at 2:30, but thanks for the great talk!