Samuel Kierszbaum, PhD

42 Followers

Jan 6

Data science & Meditation: a love story

Why data scientists can benefit from incorporating meditation into their routine. In a nutshell: in this article, I propose the idea that meditating regularly leads to the development of desirable traits in the context of data science work. This idea is developed in four sections. A meditation protocol: I provide a meditation protocol that can be applied by anyone who is interested in giving meditation…

Data Science

4 min read



Jul 31, 2020

F1 score in NLP span-based Question Answering task

In the context of span-based Question Answering, we are going to look at what the F1 score means. Let us first give a few definitions. Span-based QA is a task where you have two texts: one called the context, and another called the question. …

QA

3 min read
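The token-level F1 the article refers to can be sketched as follows — a minimal illustration of the idea, assuming whitespace tokenization; the function name is my own, not code from the post:

```python
from collections import Counter

def span_f1(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both spans.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial overlap: precision 3/3, recall 3/5 -> F1 = 0.75
print(span_f1("in the park", "in the park at night"))
```

Real evaluation scripts (e.g. for SQuAD) also normalize text (lowercasing, stripping punctuation and articles) before comparing tokens.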



Jul 30, 2020

Pointer networks : What are they?

In this article, my aim is to explain what pointer networks are, as defined in the paper “Pointer Networks” (this is, I believe, the first article to describe pointer networks), and why they are used. This work is done in the context of my PhD…

Pointer Networks

7 min read



Published in Analytics Vidhya

Jan 27, 2020

Masking in Transformers’ self-attention mechanism

Masking is needed to prevent the attention mechanism of a transformer from “cheating” in the decoder during training (on a translation task, for instance). This kind of “cheating-proof” masking is not present on the encoder side. I had a tough time understanding how masking was done in the decoder…

Machine Learning

4 min read
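The causal mask the article discusses can be illustrated with a small NumPy sketch — this is my own toy example, not code from the post: each position is forbidden from attending to positions after it, so the corresponding scores are set to negative infinity before the softmax.

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Boolean mask, True where attention is forbidden (future positions)."""
    return np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)

def masked_softmax(scores: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Set masked scores to -inf so they receive exactly zero weight."""
    scores = np.where(mask, -np.inf, scores)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.randn(4, 4)   # raw attention scores for 4 positions
weights = masked_softmax(scores, causal_mask(4))
# Each row sums to 1 and is zero strictly above the diagonal.
```

The diagonal stays unmasked (a token may attend to itself), which also guarantees every row has at least one finite score, so the softmax is well defined.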



May 7, 2019

Safety I vs Safety II

Note: This article’s purpose is to give the reader interested in safety a brief summary of the book “Safety-I and Safety-II: The Past and Future of Safety Management” written by Erik Hollnagel. I read the book before writing this summary. …

Safety

4 min read


Samuel Kierszbaum, PhD

Data scientist
