LLM prompting involves designing text inputs that elicit a desired response from an LLM. The goal of prompting is to steer the model's behaviour toward a target outcome. Recent research has focused on developing effective prompting techniques that expand LLMs' capabilities across a variety of tasks. Examples include prompt patterns [21], in-context instruction learning [22], evolutionary prompt engineering [23] and domain-specific keywords combined with a trainable gated prompt that steers general-domain LLMs toward a target domain [24]. Zhong et al. [25] experiment with prompting LLMs on scientific tasks across fields such as business, science, and health: they provide the LLM with a research goal and two large corpora and ask it to describe corpus-level differences between them. Reppert et al. [26] develop iterated decomposition, a human-in-the-loop workflow for developing and refining compositional LLM programs that improves performance on real-world scientific question-answering tasks.
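To make the idea of designing text inputs concrete, the sketch below shows a minimal few-shot prompt builder: a task instruction is followed by worked input/output examples and then the query to be completed. The helper name, the sentiment task, and the example pairs are all hypothetical illustrations, not drawn from any of the cited works.

```python
def build_prompt(task_description, examples, query):
    """Assemble a simple few-shot prompt: instruction, worked
    examples, then the query the LLM should complete."""
    parts = [task_description]
    for inp, out in examples:
        parts.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final output blank so the LLM fills it in.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("I loved this paper.", "positive"),
     ("The results were disappointing.", "negative")],
    "The method works remarkably well.",
)
print(prompt)
```

The same structure underlies most prompting techniques: what varies is how the instruction, examples, and query are chosen, for instance by hand-crafted patterns [21] or by evolutionary search [23].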