Join us this week for a fascinating conversation on the What’s AI podcast, where Yotam Azriel, co-founder of TensorLeap, shares his journey, insights, and vision for the future of Explainable AI. Discover what the power of passion, curiosity, and focus can accomplish! And yes, I deliberately didn’t mention school or university, as Yotam is a successful data scientist… without any formal university degree!
Here are some insights from this week’s episode before you commit to a knowledge-packed hour-long discussion…
Yotam Azriel, despite not following a traditional academic path, embarked on a scientific adventure at a young age, exploring fascinating realms such as magnetic fields in physics, wireless charging technology, and AI. These diverse experiences shaped his knowledge and prepared him for his entrepreneurial endeavors.
What sets Yotam apart is his approach to learning: hands-on experience! By immersing himself in unfamiliar domains with clear expectations, goals, and deadlines, he acquires essential skills while staying firmly focused on his objectives. This method offers a relatable and effective alternative for self-learners like you (if you are reading this!).
We dive into the world of the AI startup TensorLeap — an applied explainability platform that empowers data scientists and developers working with AI models. By tackling the challenge of understanding complex AI behavior, TensorLeap is revolutionizing the landscape of explainable AI, an exciting new field with lots to discover.
Gain a deeper understanding of Explainable AI, a field with two broad goals: clarifying decision-making for end users, and using mathematical techniques to comprehend what happens inside neural networks. TensorLeap focuses on the latter, providing valuable insights into AI systems.
Looking to enter the field of AI? Yotam Azriel’s advice is to pursue something applied and tangible! By setting clear goals with real-world outcomes, you’ll find the motivation needed to learn and thrive in the exciting world of artificial intelligence.
Don’t miss this captivating podcast episode with Yotam Azriel as our special guest, interviewed by me (Louis Bouchard) for the What’s AI podcast. Tune in on Apple Podcasts, Spotify, or YouTube and expand your knowledge of Explainable AI!
Here are the questions and timestamps to follow along or jump right to the question that interests you most…
00:26 Who are you and what’s your background?
04:39 And how did you find that job?
06:14 What would you recommend to someone from a different background who wants to get into the field?
08:23 So would you recommend having some sort of incentive to learn?
10:05 Why do you assume you’re not cut out for academia if you like writing and reading papers?
15:00 What is TensorLeap and what do you do there?
21:28 What is explainable AI, and can you give an example of a technique?
27:35 Could you give an example of how an explainability technique could help improve or understand the SAM model?
31:39 Would the same explainability technique work for different architectures?
36:30 Can we do anything to understand the results of the model better when we work with pre-trained models?
40:20 What are the basics for someone new to the field who wants to get into explainable AI and use it to better understand their model and its results?
42:48 Should everyone creating AI models also be familiar with explainability and be able to understand the decisions of the models they create?
45:21 To judge whether the model’s explanation is right, might you need an expert in the specific field?
52:20 Is there a way to better understand models that are used through APIs, their decisions, and why they answer the way they do?
56:15 Is the ultimate goal of explainable AI to do anything it can to understand the AI model?
58:50 Did you look into neuroscience and try to implement some of its ideas in artificial networks, and vice versa?
01:01:20 How do you see the explainable AI field in 5 years?
01:03:27 What are the possible risks of explainable AI?
01:04:56 Who should use TensorLeap and why?