Posted on 2023-11-06, 11:49, authored by Piotr Sawicki, Marek Grzes, Luis Fabricio Góes, Dan Brown, Max Peeperkorn, Aisha Khatun
This study examines the ability of the GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4 models to generate poems in the style of specific authors using zero-shot and many-shot prompts (which use the maximum context length of 8192 tokens). We assess the performance of models that are not fine-tuned for generating poetry in the style of specific authors, via automated evaluation. Our findings indicate that without fine-tuning, even when provided with the maximum number of 17 poem examples (8192 tokens) in the prompt, these models do not generate poetry in the desired style.
Author affiliation
School of Computing and Mathematical Sciences, University of Leicester
Source
14th International Conference on Computational Creativity 2023
Version
AM (Accepted Manuscript)
Published in
International Conference on Computational Creativity