Berkeley NLP is a group of faculty and graduate students working to understand and model natural language. We are a part of Berkeley AI Research (BAIR) inside of UC Berkeley Computer Science. We use techniques in machine learning, linguistics, deep learning, and statistics to address research questions in the following areas:
- Structured Prediction and Parsing: Converting natural language into higher-level, semantically-meaningful abstractions is essential for downstream tasks. We create systems that convert natural language into source code and parse trees. Play with our neural parser!
- Grounding Language into Vision and Instructions: Language is often meaningless without real-world grounding. We combine NLP methods with computer vision models to follow instructions and navigate the world, as well as reinforcement learning techniques to learn reusable and modular plans.
- Model Robustness, Security, and Privacy: Neural NLP models are increasingly deployed in production systems, but how do they perform in dynamic, adversarial, and safety-critical environments? We study areas such as adversarial attacks, privacy-preserving NLP systems, and out-of-domain generalization.
- Computational Linguistics and Social Science: We use algorithmic techniques to study language and society, for example, by automatically reconstructing ancient languages and deciphering historical documents.
- Neuroscience and Psycholinguistics: How does the brain acquire and process language? We study techniques at the intersection of neuroscience and linguistics, connecting recent advances in both “artificial” and “real” neural networks.
- Compositional and Modular Model Architectures: Compositionality is a core aspect of language. What sort of inductive biases do models need in order to learn compositional representations? We create models with modular architectures that learn compositional representations from text and images.
- Model Analysis, Inspection, and Interpretation: Although neural NLP models are increasingly accurate, they are opaque “black boxes”. How can we help humans peer into these black boxes? We work to explain model predictions, communicate model representations to humans, and analyze and inspect model behavior.