Will we reach AGI, and if so, via transformer-based architectures? Share your perspective!
Exploring the potential of transformer-based architectures for achieving Artificial General Intelligence (AGI) and the ongoing debate surrounding it.
TL;DR ⏱️
- GenAI's impact on AGI discussions
- Technical challenges with transformer-based architectures
- Optimistic yet cautious approach at Comma Soft AG
Why GenAI boosted the discussions around AGI! 🤯
- Several institutions claim that today's GenAI architectures are a key step on the way to AGI.
- On the other hand, these systems face major technical issues that seem to rule out AGI by design: see the "stochastic parrots" critique.
- Still, some, like Ilya Sutskever, argue that transformer-based AI can reach AGI-like capabilities because it does not merely learn next-token prediction; it learns a projection space that encodes real-world understanding.
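To make the "next-token prediction" point above concrete: at inference time, a transformer maps its internal representation (the "projection space" in Sutskever's argument) to a score per vocabulary token, and a softmax turns those scores into a probability distribution over the next token. A minimal sketch with a made-up toy vocabulary and invented logit values (not from any real model):

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution over the vocabulary.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits produced by a model's final layer
vocab = ["cat", "sat", "mat", "the"]
logits = [1.2, 0.3, 2.5, 0.1]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]
print(next_token)  # greedy decoding picks the highest-probability token: "mat"
```

The debate is about what the model must learn internally to make these predictions well at scale, not about this decoding step itself.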
Further links 📖
- Critics on GenAI/LLMs: https://lnkd.in/e2JqstpW
- Ilya (OpenAI) Interview: https://lnkd.in/etu4EKQT
At Comma Soft AG, we are developing GenAI models and are optimistic about the opportunities: GenAI is already helping in our tools and pipelines. At the same time, we remain aware of the issues inherent to these approaches by design (e.g., LLM hallucination), which we are actively tackling.
Share your perspective in the comments 🤗 For more content, follow me on LinkedIn ❤️
#LostInGenai #artificialintelligence #selectllm