Carsten Felix Draschner, PhD

What color is the Coke can in this picture? On Context and Neural Structures

Context and Neural Structures in LLMs

[Image 1: the Coke can photo discussed in this post]

TL;DR ⏱️

Background

🤬 We often blame AI systems for silly or incorrect predictions. Yet these models can only work with the context we supply: using parameters learned from vast internet corpora, they map that context (system prompts, user prompts, RAG chunks) to a “plausible” response.
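To make this concrete, here is a minimal sketch of how that context gets assembled before a single token is generated. It assumes a generic chat-style interface; `build_context` and `generate` are hypothetical placeholders for illustration, not Alan.de's or any specific vendor's API.

```python
# Minimal sketch: an LLM only "sees" the context we assemble for it.
# generate() is a hypothetical stand-in for any chat-completion call.

def build_context(system_prompt: str, rag_chunks: list[str], user_prompt: str) -> list[dict]:
    """Assemble the full context the model will condition on."""
    context_block = "\n\n".join(rag_chunks)
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Context:\n{context_block}\n\nQuestion: {user_prompt}"},
    ]

def generate(messages: list[dict]) -> str:
    """Hypothetical placeholder: the model maps exactly these messages
    (and nothing else) to a 'plausible' continuation using its
    pretrained parameters."""
    raise NotImplementedError("plug in your LLM client here")

messages = build_context(
    system_prompt="Answer only from the provided context.",
    rag_chunks=["The can in the photo is rendered in grayish-blue tones."],
    user_prompt="What color is the Coke can in this picture?",
)
# answer = generate(messages)
```

Whatever ends up in `messages` is all the model has to work with; nothing outside that window can influence the answer.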

🙈 Many people assume the Coke can is red, but it isn’t: there isn’t a single red pixel in the image! ➡️ Zoom in and see for yourself. 🚨

This points to an important parallel: humans and LLMs alike rely heavily on context and assumptions.

What I have done:

👩🏽‍💻 At Comma Soft AG, while deploying GenAI solutions such as Alan.de for our customers, we found it helps to explain how the underlying neural networks operate.

When people understand this better, they feed higher-quality data into their RAG pipelines and end up with much more useful text layers to query against.
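A toy example of why chunk quality matters so much: the retriever can only surface what was put into the text layer in the first place. The keyword-overlap retriever below is a hypothetical simplification for illustration, not the pipeline we use in production.

```python
# Minimal sketch: a toy keyword-overlap retriever over RAG chunks.
# Clean, informative chunks win; scanning artifacts never make it
# into the model's context, no matter how good the model is.

def score(query: str, chunk: str) -> int:
    """Count the lowercase words shared by the query and a chunk."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks with the highest overlap score."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

noisy_chunks = ["p.12  ....  header/footer artifacts  ....", "see figure 3"]
clean_chunks = [
    "Invoices are approved by the finance team within 5 business days.",
    "Refund requests must include the original order number.",
]

query = "How long does invoice approval take?"
print(retrieve(query, noisy_chunks + clean_chunks))
# Only the clean, informative chunks can reach the model's context.
```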

IMHO:

🧠 Users often expect “magic” from AI, but both humans and LLMs rely on the quality of the given context.
✅ Better context = better predictions.
❓ To what extent should we dive deeper into ML approaches to help users achieve more reliable results?

❤️ Feel free to reach out, and give this a like if you want to see more content like this.

#llm #artificialintelligence #explainableai #machinelearning