
Implicit Regularization in Deep Learning: Lessons Learned from Matrix and Tensor Factorization

Dr. Nadav Cohen

May 23, 2021, 16:00
Zoom
Computer Science Colloquium

Abstract

Understanding deep learning calls for addressing three fundamental questions: expressiveness, optimization, and generalization.

 

Expressiveness refers to the ability of compactly sized deep neural networks to represent functions capable of solving real-world problems. Optimization concerns the effectiveness of simple gradient-based algorithms in solving non-convex neural network training programs.

 

Generalization treats the phenomenon of an implicit regularization that prevents deep learning models from overfitting even when they have many more parameters than examples to learn from.

 

This talk will describe a series of works aimed at unraveling some of the mysteries behind generalization.

 

Appealing to matrix and tensor factorization, I will present theoretical and empirical results that shed light both on the implicit regularization of neural networks and on the properties of real-world data that translate it into generalization.
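To illustrate the kind of phenomenon the talk studies, here is a minimal sketch (not taken from the talk's papers) of implicit regularization in deep matrix factorization: we fit a few observed entries of a matrix by parameterizing it as a product of two trainable factors and running plain gradient descent from a small initialization. Among the many solutions that fit the observations, gradient descent tends toward one of low effective rank. All names, sizes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Sketch: implicit low-rank bias of gradient descent on a
# depth-2 matrix factorization (illustrative, not the talk's exact setup).
rng = np.random.default_rng(0)
n = 10

# Ground truth: a rank-1 matrix, observed only on ~30% of its entries.
u = rng.normal(size=(n, 1))
target = u @ u.T
mask = rng.random((n, n)) < 0.3  # observed entries

# Parameterize W = W2 @ W1 with full-size factors; small initialization
# is what strengthens the implicit bias toward low rank.
W1 = 1e-3 * rng.normal(size=(n, n))
W2 = 1e-3 * rng.normal(size=(n, n))

lr = 0.01
for _ in range(20_000):
    residual = mask * (W2 @ W1 - target)  # loss gradient on observed entries only
    grad_W2 = residual @ W1.T
    grad_W1 = W2.T @ residual
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

W = W2 @ W1
s = np.linalg.svd(W, compute_uv=False)
# Count singular values above 5% of the largest as the "effective rank".
effective_rank = int(np.sum(s > 0.05 * s[0]))
print("effective rank of learned matrix:", effective_rank)
```

Although the factorization can express any 10×10 matrix, the solution gradient descent finds typically has effective rank far below 10, matching the low-rank structure of the data; this is the implicit regularization the abstract refers to.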

 

 

Zoom link: https://zoom.us/j/2016926425

 

Works covered in the talk were done in collaboration with Sanjeev Arora, Wei Hu, Yuping Luo, Asaf Maman, and Noam Razin.
