News

Students often train large language models (LLMs) as part of a group. In that case, your group should implement robust access ...
Large language models are infamous for spewing toxic biases, thanks to the reams of awful human-produced content they get trained on. But if the models are large enough, and humans have helped ...
A new study by THWS shows how language models such as ChatGPT exhibit systematic gender bias in everyday interactive ...
More information: Valentin Hofmann et al, Derivational morphology reveals analogical generalization in large language models, Proceedings of the National Academy of Sciences (2025). DOI: 10.1073 ...
For example, large language models (LLMs) are well-known for not being particularly good at arithmetic. Toolformer can work around that limitation by using a calculator program.
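
As a rough illustration of the tool-use pattern described above, the sketch below shows generated text containing a bracketed calculator call that is intercepted and evaluated before the text is returned. The [Calculator(...)] marker format echoes the Toolformer paper, but the helper names (calculator, fill_calculator_calls) are hypothetical and do not reflect Toolformer's actual code.

    # Minimal sketch of delegating arithmetic inside model output to a calculator.
    # Hypothetical helpers; not Toolformer's implementation.
    import ast
    import operator
    import re

    # Only plain arithmetic operators are allowed when evaluating a call.
    _OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.USub: operator.neg,
    }

    def _eval_arithmetic(node):
        """Recursively evaluate an AST restricted to numbers and _OPS."""
        if isinstance(node, ast.Expression):
            return _eval_arithmetic(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval_arithmetic(node.left),
                                       _eval_arithmetic(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval_arithmetic(node.operand))
        raise ValueError("unsupported expression")

    def calculator(expression: str) -> float:
        """Safely evaluate a plain arithmetic expression such as '400 / 1400'."""
        return _eval_arithmetic(ast.parse(expression, mode="eval"))

    def fill_calculator_calls(model_output: str) -> str:
        """Replace [Calculator(expr)] markers in generated text with their results."""
        def _replace(match: re.Match) -> str:
            return f"{calculator(match.group(1)):.2f}"
        return re.sub(r"\[Calculator\((.*?)\)\]", _replace, model_output)

    if __name__ == "__main__":
        # Imagined model output that defers the arithmetic instead of guessing.
        draft = "Out of 1400 participants, 400 (or [Calculator(400 / 1400 * 100)]%) agreed."
        print(fill_calculator_calls(draft))
        # -> Out of 1400 participants, 400 (or 28.57%) agreed.

The point of the restricted AST evaluator is simply to keep the sketch safe: the model's text is treated as untrusted input, so only numeric literals and basic arithmetic operators are ever executed.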
For example, if you trained a language model on average reviews of the movie “Waterworld,” you would end up with a model that is good at writing (or understanding) ...
For example, language models that help with programming tasks, often referred to as “coding copilots,” have become an important development tool in many companies.