<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.2">Jekyll</generator><link href="https://evandez.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://evandez.com/" rel="alternate" type="text/html" /><updated>2022-04-23T01:12:41+00:00</updated><id>https://evandez.com/feed.xml</id><title type="html">Evan Hernandez</title><author><name>Evan Hernandez</name></author><entry><title type="html">Language Explanations of Neurons</title><link href="https://evandez.com/2022/04/23/proj-milan.html" rel="alternate" type="text/html" title="Language Explanations of Neurons" /><published>2022-04-23T00:00:00+00:00</published><updated>2022-04-23T00:00:00+00:00</updated><id>https://evandez.com/2022/04/23/proj-milan</id><content type="html" xml:base="https://evandez.com/2022/04/23/proj-milan.html">&lt;p&gt;We present a procedure to automatically generate natural language descriptions
of neurons in computer vision models. These generated descriptions support
important interpretability applications: we use them to analyze neuron importance,
identify adversarial vulnerabilities, audit for unexpected features,
and edit out spurious correlations.&lt;/p&gt;</content><author><name>Evan Hernandez</name></author><summary type="html">We present a procedure to automatically generate natural language descriptions of neurons in computer vision models. These generated descriptions support important interpretability applications: we use them to analyze neuron importance, identify adversarial vulnerabilities, audit for unexpected features, and edit out spurious correlations.</summary></entry><entry><title type="html">MIT Summer Research Program</title><link href="https://evandez.com/2021/06/01/teach-msrp.html" rel="alternate" type="text/html" title="MIT Summer Research Program" /><published>2021-06-01T00:00:00+00:00</published><updated>2021-06-01T00:00:00+00:00</updated><id>https://evandez.com/2021/06/01/teach-msrp</id><content type="html" xml:base="https://evandez.com/2021/06/01/teach-msrp.html">&lt;p&gt;I had the pleasure of mentoring an MSRP summer intern on a research project. She developed a language-based image editing tool for images generated by GANs.&lt;/p&gt;</content><author><name>Evan Hernandez</name></author><summary type="html">I had the pleasure of mentoring an MSRP summer intern on a research project. She developed a language-based image editing tool for images generated by GANs.</summary></entry><entry><title type="html">Low-Dimensional Probing</title><link href="https://evandez.com/2021/01/01/proj-low-dim-probes.html" rel="alternate" type="text/html" title="Low-Dimensional Probing" /><published>2021-01-01T00:00:00+00:00</published><updated>2021-01-01T00:00:00+00:00</updated><id>https://evandez.com/2021/01/01/proj-low-dim-probes</id><content type="html" xml:base="https://evandez.com/2021/01/01/proj-low-dim-probes.html">&lt;p&gt;How do word representations geometrically encode linguistic abstractions like part of speech? 
We find that many linguistic features are encoded in &lt;b&gt;low-dimensional subspaces&lt;/b&gt; of contextual word representation spaces, and these subspaces can causally influence model predictions.&lt;/p&gt;</content><author><name>Evan Hernandez</name></author><summary type="html">How do word representations geometrically encode linguistic abstractions like part of speech? We find that many linguistic features are encoded in low-dimensional subspaces of contextual word representation spaces, and these subspaces can causally influence model predictions.</summary></entry><entry><title type="html">Visual Concept Vocabulary for GANs</title><link href="https://evandez.com/2021/01/01/proj-visual-vocab.html" rel="alternate" type="text/html" title="Visual Concept Vocabulary for GANs" /><published>2021-01-01T00:00:00+00:00</published><updated>2021-01-01T00:00:00+00:00</updated><id>https://evandez.com/2021/01/01/proj-visual-vocab</id><content type="html" xml:base="https://evandez.com/2021/01/01/proj-visual-vocab.html">&lt;p&gt;GANs sometimes encode visual concepts in their latent space as &lt;b&gt;linear directions&lt;/b&gt;.
We construct a &lt;b&gt;visual concept vocabulary&lt;/b&gt; for pretrained GANs, consisting of latent directions
and free-form language descriptions of the changes they induce. We then distill the vocabulary into simpler,
one-word visual concepts (e.g., &lt;i&gt;snow&lt;/i&gt; or &lt;i&gt;clouds&lt;/i&gt;).&lt;/p&gt;</content><author><name>Sarah Schwettmann</name></author><summary type="html">GANs sometimes encode visual concepts in their latent space as linear directions. We construct a visual concept vocabulary for pretrained GANs, consisting of latent directions and free-form language descriptions of the changes they induce. We then distill the vocabulary into simpler, one-word visual concepts (e.g., snow or clouds).</summary></entry><entry><title type="html">6.864: Advanced Natural Language Processing</title><link href="https://evandez.com/2021/01/01/teach-advanced-nlp.html" rel="alternate" type="text/html" title="6.864: Advanced Natural Language Processing" /><published>2021-01-01T00:00:00+00:00</published><updated>2021-01-01T00:00:00+00:00</updated><id>https://evandez.com/2021/01/01/teach-advanced-nlp</id><content type="html" xml:base="https://evandez.com/2021/01/01/teach-advanced-nlp.html">&lt;p&gt;MIT’s primary NLP course, typically taken after a first course in ML. I wrote homework assignments, planned recitations, and led weekly office hours.&lt;/p&gt;</content><author><name>Evan Hernandez</name></author><summary type="html">MIT’s primary NLP course, typically taken after a first course in ML. I wrote homework assignments, planned recitations, and led weekly office hours.</summary></entry><entry><title type="html">Undergraduate Learning Center</title><link href="https://evandez.com/2018/01/01/teach-ulc.html" rel="alternate" type="text/html" title="Undergraduate Learning Center" /><published>2018-01-01T00:00:00+00:00</published><updated>2018-01-01T00:00:00+00:00</updated><id>https://evandez.com/2018/01/01/teach-ulc</id><content type="html" xml:base="https://evandez.com/2018/01/01/teach-ulc.html">&lt;p&gt;For three years, I tutored underrepresented students in UW-Madison engineering programs on introductory computer science and math classes. 
I also developed software to support both the by-request and drop-in tutoring services.&lt;/p&gt;</content><author><name>Evan Hernandez</name></author><summary type="html">For three years, I tutored underrepresented students in UW-Madison engineering programs on introductory computer science and math classes. I also developed software to support both the by-request and drop-in tutoring services.</summary></entry></feed>