My research interests lie in conversational AI and dialog systems, particularly low-resource, cross-lingual, and multi-modal NLP and speech processing.
While at National Taiwan University and UC Berkeley,
I focused on BioMEMS applications in single-cell analysis and cell-based data analysis.
Projects
I mainly focus on exploring deep learning techniques to improve models' naturalness and robustness in spoken language understanding,
task-oriented dialog, speech recognition, multi-modal conversational QA, and related areas.
I have also been happy to work at Meta Reality Labs, Amazon Alexa Speech, Alexa Intelligent Decisions, and VMware.
Life
I also serve on the program committees of WACV, ECCV, EMNLP, AAAI, CVPR, ACL, Interspeech, and other venues.
Outside academics, I enjoy sports, especially basketball and badminton, as well as playing guitar and drums.
I also have several leadership and extracurricular experiences in volunteering.
Generated medical descriptions for retinal images with keyword reinforcement.
This is a collaborative project with published papers; see the GitHub Pages link below for more.
Simulated the drawing style of Naruto figures to construct entirely new Naruto characters
with a deep convolutional generative adversarial network (DCGAN).
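A minimal sketch of the kind of DCGAN generator such a project might use: transposed convolutions upsample a noise vector into an image. The latent dimension, feature widths, and 32x32 output size here are illustrative assumptions, not the project's actual configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: noise vector -> RGB image (illustrative sizes)."""
    def __init__(self, z_dim=100, channels=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # project the z_dim noise vector to a 4x4 feature map
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8),
            nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4),
            nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2),
            nn.ReLU(True),
            # 16x16 -> 32x32, pixel values in [-1, 1]
            nn.ConvTranspose2d(feat * 2, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
fake = g(torch.randn(8, 100, 1, 1))  # a batch of 8 noise vectors -> 8 fake images
```

In full DCGAN training, this generator would be paired with a convolutional discriminator and optimized adversarially.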
Classified malaria cell images and diagnosed infection likelihood with a
modified ResNet50 implemented in PyTorch, achieving a test
accuracy of around 85%.
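The core idea behind a ResNet-style classifier is the residual skip connection. A minimal self-contained sketch (a tiny stand-in, not the actual modified ResNet50 used in the project; layer sizes and the two-class head are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Basic residual block: output = relu(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # the skip connection that defines a ResNet

class TinyResNet(nn.Module):
    """Toy ResNet classifier; 2 classes, e.g. parasitized vs. uninfected cells."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, padding=1)
        self.block = ResidualBlock(16)
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        h = self.block(F.relu(self.stem(x)))
        h = F.adaptive_avg_pool2d(h, 1).flatten(1)  # global average pooling
        return self.head(h)

model = TinyResNet()
logits = model(torch.randn(4, 3, 64, 64))  # batch of 4 cell images
```

In practice one would start from the pretrained `resnet50` in torchvision and replace its final fully connected layer with a two-class head.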
Trained a model in PyTorch to generate Chinese lyrics from user-specified seed words,
using a dataset of songs by four popular Taiwanese singers
scraped from the internet.
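Lyrics generation of this kind is typically framed as character-level language modeling. A minimal sketch under that assumption (vocabulary size and hidden dimensions are illustrative, and the actual project's architecture may differ):

```python
import torch
import torch.nn as nn

class LyricsLM(nn.Module):
    """Character-level language model: embed -> LSTM -> per-step vocabulary logits."""
    def __init__(self, vocab_size=3000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        return self.out(h), state  # logits over the next character at each step

model = LyricsLM()
tokens = torch.randint(0, 3000, (2, 20))  # batch of 2 sequences of 20 character ids
logits, _ = model(tokens)
```

To generate from seed words, one would feed the seed characters through the model, then repeatedly sample the next character from the logits and feed it back in with the carried-over LSTM state.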
Described general images in one to two sentences using the dataset
from Google's Conceptual Captions competition, extracting and training on ~50,000 images
to obtain decent results.
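Image captioning models of this kind are commonly built as an encoder-decoder: a CNN encodes the image into a feature vector that initializes an LSTM decoder. A hypothetical sketch of the decoder half (the CNN encoder, feature dimension, and vocabulary size are all assumptions for illustration):

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Decode a caption from an image feature vector (CNN encoder assumed upstream)."""
    def __init__(self, feat_dim=512, vocab_size=5000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)  # image feature -> initial hidden state
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat, captions):
        h0 = torch.tanh(self.init_h(image_feat)).unsqueeze(0)  # (1, batch, hidden)
        c0 = torch.zeros_like(h0)
        h, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.out(h)  # per-step logits over the caption vocabulary

decoder = CaptionDecoder()
feats = torch.randn(4, 512)             # hypothetical CNN features for 4 images
caps = torch.randint(0, 5000, (4, 12))  # token ids of 4 partial captions
logits = decoder(feats, caps)
```

Training would minimize cross-entropy between these logits and the shifted ground-truth captions; at inference, words are decoded one at a time from a start token.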