In this blog post, we illustrate how RLHF can be performed on Amazon SageMaker by conducting an experiment with the popular open-source RLHF repo trlX. Through our experiment, we demonstrate how RLHF can be used to increase the helpfulness or harmlessness of a large language model using the publicly available Helpfulness and Harmlessness (HH) dataset provided by Anthropic. Using this dataset, we conduct our experiment in an Amazon SageMaker Studio notebook running on an ml.p4d.24xlarge instance. Finally, we provide a Jupyter notebook that replicates our experiments.
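To give a sense of the data behind the experiment, the following sketch loads the HH dataset with the Hugging Face `datasets` library and prints one preference pair. This is a minimal illustration, not code from the post's notebook; the dataset identifier `Anthropic/hh-rlhf` and the availability of the `datasets` package in the Studio environment are assumptions.

```python
# Minimal sketch: inspect the Anthropic HH preference dataset.
# Assumes `pip install datasets` has been run in the SageMaker
# Studio environment; `Anthropic/hh-rlhf` is the Hugging Face
# Hub identifier assumed here for the HH dataset.
from datasets import load_dataset

hh = load_dataset("Anthropic/hh-rlhf")

# Each record pairs a "chosen" (preferred) response with a
# "rejected" (dispreferred) response for the same conversation,
# which is what the RLHF reward model learns from.
example = hh["train"][0]
print(example["chosen"][:200])    # preferred continuation
print(example["rejected"][:200])  # dispreferred continuation
```

Preference pairs like these are what make RLHF possible: the reward model is trained to score the chosen response above the rejected one, and the language model is then optimized against that reward.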