Adjust GenAI responses towards more ethical behavior possible through system prompts!? Do you trust in such LLM-chat prepended pamphlets?
Exploring the potential and challenges of using system prompts to guide LLM behavior towards ethical outputs.
TL;DR ⏱️
- GenAI chat interactions often include system prompts
- System prompts aim to guide ethical LLM behavior
- Challenges exist in ensuring compliance and formulation
- Questions on designing and revealing system prompts
Background 🤓
- GenAI is often used via chat
- We enter prompts in the Chat UI
- Under the hood, the Chat programs usually add a system prompt
System-Prompt 💬
- This system prompt should control the behavior of the LLM output
- The system prompt complements other behavioral optimizations such as alignment and other guardrails to improve ethical GenAI response
- The system prompt can describe behavior such as identity or desired behavior
Problem 🥴
- There is no guarantee to what extent the system prompt will be followed
- It is unclear what details should be included in the system prompt
- It is not possible to prevent the model from revealing the system prompt
- It is unclear how best to formulate the system prompt
Questions 🤔
- How do you design system prompts, and what are yours?
- Do you think system prompts help to ensure the model's behavior?
- Do you think chat UIs should not reveal the system prompt and how would you achieve this?
Further Reading 📖
- Change Alignment of LLMs: https://lnkd.in/eWS-VZCD
- LLM Behavior: https://lnkd.in/ed52RBNe
- My current LLM Developments: https://lnkd.in/ey8aZTjB
We at Comma Soft AG develop GenAI solutions including optimizations of LLM responses through various components where also the design of system prompts plays a role.
#artificialintelligence #responsibleai #llm #alan #aiethics