Projects



Transformer Chatbots



A wrapper around tensor2tensor for flexibly training neural chatbots, interacting with them, and generating data. Please also check the wiki page for my notes on over 150 papers.



Protein Structure Simulation and Searching Algorithms



Simulates the change of dipole moments in complex protein structures and provides a 3D UI in OpenGL. Several search algorithms are also implemented to find protein structures that satisfy user-defined logic functions. Download here.
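
The core quantity in such a simulation is the dipole-dipole coupling energy between pairs of dipoles. Below is a minimal NumPy sketch of that calculation, not the project's actual code; the function names and the assumption of point dipoles in vacuum (SI units) are my own.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def dipole_dipole_energy(p1, p2, r1, r2):
    """Interaction energy of two point dipoles p1, p2 (C*m) at positions r1, r2 (m):
    U = [p1.p2 - 3 (p1.r_hat)(p2.r_hat)] / (4 * pi * eps0 * r^3)
    """
    r_vec = np.asarray(r2, dtype=float) - np.asarray(r1, dtype=float)
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (np.dot(p1, p2) - 3.0 * np.dot(p1, r_hat) * np.dot(p2, r_hat)) / (4.0 * np.pi * EPS0 * r**3)

def total_coupling_energy(dipoles, positions):
    """Sum the pairwise coupling energy over every dipole pair in a structure."""
    n = len(dipoles)
    return sum(dipole_dipole_energy(dipoles[i], dipoles[j], positions[i], positions[j])
               for i in range(n) for j in range(i + 1, n))
```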



Processing Demos



While learning Processing from this book, I created a repository for my example demos. You can try two of them in the browser.

OpenGL game



A very simple platformer game in OpenGL. You can download it here.

Papers

The Gutenberg Dialogue Dataset

Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume (2021)

Richard Csaky, Gábor Recski

[Paper] [Code] [Demo] [Slides] [Poster]

Improving Neural Conversational Models with Entropy-Based Data Filtering

Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019)

Richard Csaky, Patrik Purgai, Gábor Recski

[Paper] [Filtering Code] [Training Code] [Evaluation Code] [Poster (vertical)] [Poster (horizontal)] [Blog post] [Slides]

Proposal Towards a Personalized Knowledge-powered Self-play Based Ensemble Dialog System

arXiv (2019)

Richard Csaky

[Paper]

Deep Learning Based Chatbot Models

National Scientific Students' Associations Conference (2017)

Richard Csaky

[Paper] [Code]

Study of dipole-dipole coupled protein-based circuits using self-developed simulation software

Scientific Students' Associations Conference (2016)

Richard Csaky, Edvard Bayer

[Paper] [Code]

Presentations

EACL 2021

April 2021

The Gutenberg Dialogue Dataset

[Poster]

OUBT Biohackathon

March 2021

Brainstream: a machine-learning-driven BCI that translates thoughts into text

EurNLP 2019

October 2019

Improving Neural Conversational Models with Entropy-Based Data Filtering

[Poster]

NLP for ConvAI workshop @ ACL

August 2019

Improving Neural Conversational Models with Entropy-Based Data Filtering

[Poster]

ACL 2019

July 2019

Improving Neural Conversational Models with Entropy-Based Data Filtering

[Talk]

EEML 2019

July 2019

Improving Neural Conversational Models with Entropy-Based Data Filtering

[Poster]

RAAI 2019

June 2019

Improving Neural Conversational Models with Entropy-Based Data Filtering

[Poster]

Hungarian NLP Meetup

May 2019

Neural Chatbots

[Slides]

Experience


  • Oct 2020 - Sep 2023
    PhD in AI + Computational Neuroscience @ University of Oxford

    https://www.psych.ox.ac.uk/team/richard-csaky-1/

  • Feb 2020 - Jun 2020
    Artificial Intelligence M.S. @ KU Leuven

    I participated in the Erasmus program at KU Leuven, taking 4 courses: Bioinformatics, Brain-Computer Interfaces, Topics in Behavioural Neuroscience, and Artificial Neural Networks and Deep Learning.


  • Jul 2019 - Jul 2019
    Eastern European Machine Learning Summer School

    A one-week summer school consisting of intensive lectures and practical sessions. I had the pleasure of attending lectures by top researchers in the field, such as Anca Dragan, Andrew Zisserman, Antoine Bordes, Doina Precup, Shimon Whiteson, and Rahul Sukthankar. The in-depth courses covered a remarkable breadth of topics in machine learning, including reinforcement learning, computer vision, natural language processing, and Bayesian learning. The environment was diverse and enthusiastic, with participants from over 50 countries.


  • Sep 2018 - Jun 2020
    Computer Science M.S. @ BME

    During my Master's I worked on dialogue modeling research and took 6 extra B.S.-level courses from the computer science curriculum. For one semester I also participated in the National Excellence Program, a prestigious scholarship.


  • Feb 2018 - Oct 2019
    NLP Researcher @ BME

    During this time I was hired on and off to work on my research related to dialogue modeling. Based on more than 150 papers that I had read, I wrote a literature review paper that I presented at a national competition, winning first place. The GitHub repo associated with this project, where I also keep my notes on the publications I read, has over 400 stars. Since that paper, I have worked on a simple idea to make open-domain neural conversation models better. We hypothesized that it is hard for neural models to learn responses to open-ended utterances like "What did you do today?", since dialog datasets contain many adequate replies to such inputs. Thus, we proposed a data filtering method in which such utterances are excluded from the training set (a simplified sketch of the idea follows at the end of this entry). I presented our results at ACL 2019, the leading international conference in NLP, and at other conferences and workshops, as detailed in my CV.

    In the spring of last year, I put together a small team of three other students, whom I advised. They joined my dialog modeling work, exploring ideas such as adapting BERT and GPT-2 to dialog modeling and using reinforcement learning to train chatbots. With this team I applied to the Amazon Alexa Prize, for which I had to write a detailed research proposal highlighting the team's achievements and my ideas and vision for the project. Recently, I worked on a new, large, high-quality dialog dataset based on books from Project Gutenberg and compared it to other large datasets in a transfer learning setting. I found that pre-training on my dataset leads to better performance, supporting the claim that it is of higher quality. I presented these results at an annual university conference, and I plan to improve the dataset further and submit a paper to a major conference.
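
As a rough illustration of the filtering idea (not the exact method from the paper), one can estimate, for each distinct source utterance in a dialog corpus, the entropy of the distribution over its target responses, and drop the pairs whose source entropy exceeds a threshold. A minimal sketch, where the threshold value and the use of exact string matches for targets are simplifying assumptions:

```python
import math
from collections import Counter, defaultdict

def filter_by_source_entropy(pairs, max_entropy=2.0):
    """pairs: list of (source, target) utterance strings.

    Drops pairs whose source admits too many different replies in the data,
    i.e. whose empirical response entropy exceeds max_entropy (an example value).
    """
    targets_per_source = defaultdict(list)
    for src, tgt in pairs:
        targets_per_source[src].append(tgt)

    def entropy(targets):
        counts = Counter(targets)
        total = len(targets)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    source_entropy = {src: entropy(tgts) for src, tgts in targets_per_source.items()}
    return [(src, tgt) for src, tgt in pairs if source_entropy[src] <= max_entropy]
```

The published method also has target-side and clustering-based variants; this sketch only conveys the general shape of the source-side approach.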


  • Jul 2017 - Aug 2018
    Software Engineer @ Robert Bosch GmbH

    I also explored other areas of deep learning, most notably during my internship and a later full-time position at Robert Bosch GmbH. I worked on semantic segmentation models, applying them to the task of segmenting free parking spaces. The dataset, however, was of poor quality, so I built a user interface and, with the help of a test driver, gathered ten thousand labeled images. With my program, parking spots projected onto the ground could be added and moved around on the live video of a car camera. Finally, I modified the YOLO architecture to suit my task and trained it on our new dataset, achieving results that convinced the department to give the project further funding. Since I was involved in the project from its conception to the demo phase and had to solve challenges across several domains (research, economic, engineering), I feel confident in my ability to tackle problems and roadblocks without giving up. I got along very well with the people in my group and consulted several people from different groups (and countries) for advice or help.


  • Sep 2014 - Jan 2018
    Mechatronics Engineering B.S. @ BME

    During my B.S. I was a teaching assistant for one semester in electrical engineering labs. I also took part in a small (two-person) research project in which I designed a program for simulating and experimenting with protein-based logic circuits. I created an OpenGL user interface for the simulation program, where users could build 3D molecular structures, and I implemented several algorithms (e.g., genetic algorithms) to search for molecular structures that satisfy user-defined constraints (a generic sketch of such a search follows below).
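
For illustration, the constraint-based structure search could be organized as a genetic algorithm along the following lines. This is a generic sketch rather than the actual implementation, and `score`, `random_structure`, `mutate`, and `crossover` are hypothetical placeholders supplied by the caller, with `score` encoding the user-defined constraints.

```python
import random

def genetic_search(score, random_structure, mutate, crossover,
                   pop_size=50, generations=200, elite=5):
    """Generic genetic algorithm: evolve candidate structures toward higher score.

    score(structure) -> float   fitness derived from the user-defined constraints
    random_structure() -> any   random initial candidate
    mutate(structure) -> any    randomly perturbed copy
    crossover(a, b) -> any      recombination of two parent structures
    """
    population = [random_structure() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)        # best candidates first
        parents = population[:max(2, pop_size // 2)]
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = population[:elite] + children      # keep elites, add offspring
    return max(population, key=score)
```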

Resume [download]